source of geminispace.info - the search provider for gemini space

README.md

Gemini Universal Search (GUS)

Dependencies

  1. Install Python (> 3.5) and Poetry
  2. Run: poetry install
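The two steps above can be sketched as a quick version check plus install. The pip-based Poetry install shown in the comments is just one option (distro packages work too), and it assumes you run it from the repository root:

```shell
# Verify the interpreter meets the stated minimum (Python > 3.5):
python3 -c 'import sys; assert sys.version_info > (3, 5), sys.version'

# Install Poetry, then the project dependencies (run from the repo root):
# python3 -m pip install --user poetry
# poetry install
```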

Making an initial index

Make sure you have some Gemini URLs for testing that are well sandboxed, so a test crawl does not wander into huge parts of Gemini space.

  1. Create a "seed-requests.txt" file with your test Gemini URLs
  2. Run: poetry run crawl -d
  3. Run: poetry run build_index -d

This creates an index.new directory; rename it to index.
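The steps above can be sketched as a small script. The capsule URLs written to seed-requests.txt are placeholders, so substitute your own sandboxed test URLs:

```shell
# Hypothetical sandboxed seeds; replace with your own test URLs.
cat > seed-requests.txt <<'EOF'
gemini://example.org/
gemini://example.org/docs/
EOF

# Crawl the seeds and build the index (requires the GUS checkout and Poetry).
if command -v poetry >/dev/null 2>&1 && [ -f pyproject.toml ]; then
    poetry run crawl -d
    poetry run build_index -d
fi

# build_index writes to index.new; the frontend serves from "index".
if [ -d index.new ]; then
    mv index.new index
fi
```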

Running the frontend

  1. Run: poetry run serve
  2. Navigate your gemini client to: "gemini://localhost/"

Running the frontend in production with systemd

  1. Update infra/gus.service to match your needs (directory, user)
  2. Copy infra/gus.service to /etc/systemd/system/
  3. Run systemctl enable gus and systemctl start gus
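For reference, a frontend unit looks roughly like this. This is a hypothetical sketch, not the shipped infra/gus.service; User, WorkingDirectory, and the ExecStart path are assumptions you should adapt:

```ini
[Unit]
Description=GUS frontend
After=network.target

[Service]
Type=simple
User=gus
WorkingDirectory=/home/gus/gus
ExecStart=/usr/bin/env poetry run serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```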

Running the crawl to update the index

  1. Run: poetry run crawl
  2. Run: poetry run build_index
  3. Restart frontend
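The three update steps can be wrapped in one helper script. The script name and the gus unit name are assumptions, and it presumes the repository root as the working directory:

```shell
# Hypothetical helper script; run it from the GUS checkout.
cat > update-index.sh <<'EOF'
#!/bin/sh
set -e
poetry run crawl          # refresh the crawl data
poetry run build_index    # rebuild the search index
systemctl restart gus     # restart the frontend so it serves the new index
EOF
chmod +x update-index.sh
```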

Running the crawl & indexer in production with systemd

  1. Update infra/gus-crawl.service & infra/gus-index.service to match your needs (directory, user)
  2. Copy both files to /etc/systemd/system/
  3. Set up a cron job for root with the following schedule: 0 9 */3 * * systemctl start gus-crawl --no-block
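The cron expression starts the crawl unit at 09:00 on every third day of the month; with --no-block, systemctl returns immediately instead of waiting for the long-running crawl to finish. Added via crontab -e as root, the entry looks like:

```
# min hour dom mon dow  command
0 9 */3 * * systemctl start gus-crawl --no-block
```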

Running the test suite

Run: poetry run pytest

Roadmap / TODOs

  • TODO: add functionality to create a mock index
  • TODO: exclude raw-text blocks from indexed content
  • TODO: strip control characters from logged output like URLs