Yeah, it seems likely you’ll end up with 2 or 3 different store/query mechanisms: something fairly flat and transactional-ish for interactive edits (best-effort is probably fine; you don’t need long-disconnected edit resolution), and something for search/traversal, which will vary widely based on the depth of the traversals, the cardinality of the graph, etc. (it could be a denormalized schema in the same DBMS or a different one). Perhaps also a caching layer for low-latency needs (maybe not a different store/query mechanism, just results caching somewhere), and perhaps an analytics store for asynchronous big-data processing.
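To make the caching piece concrete, here's a minimal cache-aside sketch of what I mean by "just results caching somewhere". Redis is only an assumption, and run_traversal is a hypothetical stand-in for whatever traversal query you end up with:

```python
# Cache-aside wrapper for the low-latency layer. Redis is an assumption;
# run_traversal() is a hypothetical stand-in for the real traversal query.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def run_traversal(node_id: str, depth: int) -> list:
    # Placeholder: the actual query against whichever traversal store you pick.
    raise NotImplementedError

def cached_traversal(node_id: str, depth: int) -> list:
    key = f"traversal:{node_id}:{depth}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)  # cache hit: never touch the traversal store
    result = run_traversal(node_id, depth)
    cache.set(key, json.dumps(result), ex=60)  # short TTL bounds staleness
    return result
```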
Honestly, even if this is pretty big in scope, I’d prototype with Mongo or DynamoDB as the primary store (or a SQL store if you’re into that), using simple adjacency tables for the graph connections. Then layer a GraphQL processor either directly on top of that, or on a replicated, differently-normalized store.
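For concreteness, here's roughly what I mean by adjacency tables, sketched against Mongo with pymongo. The collection name and edge shape are assumptions, not a prescription:

```python
# Adjacency-list graph modeling in MongoDB (pymongo). The "edges"
# collection name and the edge document shape are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["prototype"]

# One flat document per connection, exactly like a row in an adjacency table.
db.edges.create_index([("src", 1)])
db.edges.insert_many([
    {"src": "a", "dst": "b", "type": "links_to"},
    {"src": "b", "dst": "c", "type": "links_to"},
])

def neighbors(node_id: str) -> set[str]:
    """One hop out from node_id: a single indexed query."""
    return {e["dst"] for e in db.edges.find({"src": node_id})}

def traverse(start: str, max_depth: int) -> set[str]:
    """Bounded breadth-first traversal; cost grows with depth and fan-out."""
    seen, frontier = {start}, {start}
    for _ in range(max_depth):
        nxt = {n for node in frontier for n in neighbors(node)} - seen
        if not nxt:
            break
        seen |= nxt
        frontier = nxt
    return seen - {start}
```

The DynamoDB version is the same idea with a (src, dst) key schema and one Query per hop; Mongo also has $graphLookup if you'd rather push the recursion server-side.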