# Quickstart
This guide walks you through loading a dataset and running your first Cypher query.
## Prerequisites
Build TurboLynx first: see Building TurboLynx.
## Step 1 — Import a Dataset
```bash
turbolynx import \
  --workspace /path/to/db \
  --nodes Person data/Person.csv \
  --nodes Comment data/Comment.csv \
  --relationships KNOWS data/Person_knows_Person.csv
```
- `--workspace` — directory where the database is stored (`catalog.bin`, `store.db`, `.store_meta`)
- `--nodes <Label> <file>` — vertex CSV file (repeatable)
- `--relationships <TYPE> <file>` — edge CSV file (repeatable)
For CSV format details, see Data Import.
A sample LDBC SNB SF1 dataset (pipe-separated CSV with typed headers, ready for turbolynx import) is hosted on Hugging Face:
```bash
# Download via huggingface-cli
pip install huggingface_hub
huggingface-cli download HuggignHajae/TurboLynx-LDBC-SF1 --repo-type dataset --local-dir ./ldbc-sf1
```
Or browse the files directly: HuggignHajae/TurboLynx-LDBC-SF1.
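As a rough illustration of the pipe-separated, typed-header layout, a vertex CSV might begin as sketched below. The column names and type-annotation syntax here are hypothetical; see Data Import for the authoritative format.

```csv
id:ID|firstName:STRING|lastName:STRING
933|Mahinda|Perera
```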
## Step 2 — Open the Shell
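To start an interactive session against the imported workspace, point the shell at the same `--workspace` directory (this invocation mirrors the non-interactive example later in this guide):

```bash
turbolynx shell --workspace /path/to/db
```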
## Step 3 — Run a Query
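Once the shell is open, type a Cypher statement at the prompt and terminate it with a semicolon. A minimal sketch (the prompt shown is illustrative; the query reuses properties from the examples in this guide):

```
turbolynx> MATCH (n:Person) RETURN n.firstName LIMIT 5;
```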
## Step 4 — Build Statistics (first time only)
Statistics are required by the ORCA cost-based optimizer to generate good plans. Run `.analyze` once after loading data.
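For example, from inside the shell (prompt illustrative):

```
turbolynx> .analyze
```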
## Run a Single Query Non-Interactively
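The `--query` flag runs one statement and exits, which is convenient for scripting. A minimal example:

```bash
turbolynx shell --workspace /path/to/db \
  --query "MATCH (n:Person) RETURN n.firstName LIMIT 5;"
```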
Combine with --mode to control output format:
```bash
turbolynx shell --workspace /path/to/db \
  --mode csv \
  --query "MATCH (n:Person) RETURN n.firstName, n.lastName;"
```
## Example: Hop Query
```cypher
MATCH (a:Person)-[:KNOWS]->(b:Person)
WHERE a.firstName = 'Alice'
RETURN b.firstName, b.lastName
LIMIT 20;
```
## Next Steps
- CLI reference — all shell options and dot commands
- Data Import formats
- Supported Cypher syntax