The Smart Thing: Building a Real Codebase Graph
Greptile does something genuinely clever that most AI code review tools don't bother with — it builds a complete graph of your codebase instead of just doing semantic search. This matters more than it sounds like it should.
When you ask "where is this function used?" naive semantic search gives you a mess of false positives. It finds comments that mention the function name, test descriptions, documentation fragments, and maybe the actual call sites buried somewhere in there. Greptile's approach creates per-function chunks and maps the actual dependency relationships, which means it can trace impact and find every usage site accurately. The result is reviews that understand context — not just what changed, but what else depends on it and what patterns already exist in your codebase.
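To make the distinction concrete, here is a minimal sketch of why a graph answers "where is this used?" precisely. Greptile's actual internal representation is not public; the `CodeGraph` class, its methods, and the function names below are invented for illustration only.

```python
from collections import defaultdict

class CodeGraph:
    """Toy per-function graph: nodes are functions, edges are calls."""

    def __init__(self):
        self.callers = defaultdict(set)  # callee -> set of callers

    def add_call(self, caller: str, callee: str) -> None:
        self.callers[callee].add(caller)

    def usage_sites(self, func: str) -> set[str]:
        # Exact call sites only: no comments, docstrings, or test
        # descriptions can sneak in, unlike text/semantic search.
        return self.callers[func]

    def impact(self, func: str) -> set[str]:
        # Transitive closure of callers: everything that could be
        # affected if `func` changes behavior.
        seen, stack = set(), [func]
        while stack:
            for caller in self.callers[stack.pop()]:
                if caller not in seen:
                    seen.add(caller)
                    stack.append(caller)
        return seen

g = CodeGraph()
g.add_call("handler", "validate")
g.add_call("validate", "parse_date")
g.add_call("report", "parse_date")
print(sorted(g.usage_sites("parse_date")))  # ['report', 'validate']
print(sorted(g.impact("parse_date")))       # ['handler', 'report', 'validate']
```

The key property is that `impact` walks edges, not text, so a change to `parse_date` correctly surfaces `handler` even though `handler` never mentions `parse_date` by name.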
This is the right technical bet. Code isn't just text to search through; it's a connected system. Treating it that way produces better signal.
The Friction: Configuration Is Still a Deployment
Here's where things get bumpy. Configuration changes require committing a file and pushing it, then waiting to see if your rules actually work on the next PR. There's no preview mode, no dry run, nothing that shows you "here's what will happen when you push this." You're flying blind until the next review runs.
This gets worse with the cascading folder model, which is smart for multi-team orgs but complex to reason about. Rules can exist at the org level, in .greptile/ folders, and in greptile.json files, with overrides layering on top of each other. Documentation explicitly calls out the lack of visibility into which rules actually apply to a specific file. You have to manually resolve the entire config in your head.
What makes this sting is that configuration is the control surface. If you want Greptile to stop being noisy about a certain type of issue, or focus more on security patterns, or ignore test files — all of that requires config changes. When those changes feel risky to make, people either tolerate noise or give up on customization entirely. A real-time preview showing "here's the merged config for this file path" would fix this completely.
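The preview the text asks for is not hard to imagine. The sketch below assumes a simple "deeper folder wins, key by key" precedence; Greptile's real merge rules and config schema are not public, and the keys (`strictness`, `ignore_tests`) are invented for the example.

```python
from pathlib import PurePosixPath

def resolve_config(org_config: dict, folder_configs: dict[str, dict],
                   file_path: str) -> dict:
    """Show the merged config for one file path.

    folder_configs maps a folder (e.g. "src/api") to the dict parsed
    from a hypothetical greptile.json in that folder. We walk from
    the repo root down toward the file; deeper folders override.
    """
    merged = dict(org_config)  # org level is the base layer
    parts = PurePosixPath(file_path).parent.parts
    for i in range(len(parts) + 1):
        folder = "/".join(parts[:i]) or "."
        if folder in folder_configs:
            merged.update(folder_configs[folder])
    return merged

org = {"strictness": "medium", "ignore_tests": False}
folders = {
    ".": {"strictness": "high"},
    "src/api": {"ignore_tests": True},
}
print(resolve_config(org, folders, "src/api/routes.py"))
# {'strictness': 'high', 'ignore_tests': True}
```

A dry-run command that printed exactly this merged dict per file path would remove the guesswork without requiring a single commit.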
The Underused Superpower: Learning From Feedback
Greptile learns your team's coding standards from thumbs up/down reactions on PR comments. Give it 2-3 weeks of consistent feedback and it adapts noticeably — focusing on what matters to you, skipping what doesn't. This is a legitimately differentiating feature. Generic review tools stay generic forever.
The problem is that users aren't providing enough reactions for the system to learn well. Low reaction counts limit how much the model can adapt, something Greptile's own documentation acknowledges. People either forget to react, don't realize it matters, or find it too effortful to add the explanatory context that makes feedback useful.
What would help here is making feedback less ambient and more structured. Instead of hoping users remember to react, surface 5 random recent comments each week in a dashboard panel with targeted questions: "Was this useful? If no, was it off-topic, too nitpicky, or wrong?" Reduce the cognitive load. Make it feel like training, not just reacting. The learning system is there — it just needs more signal to work with.
The 1-2 hour initial indexing delay is another friction point worth mentioning. For someone evaluating the tool, that's a long wait before seeing any value. A hybrid approach that indexes recently-touched files first (to enable a quick initial review) while building the full graph in the background would dramatically improve the first-run experience.
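The hybrid ordering is essentially a priority queue keyed on recency. A minimal sketch, assuming last-modified time is a reasonable proxy for "hot" files; the file names and timestamps are invented, and nothing here reflects how Greptile actually schedules indexing.

```python
import time

def index_order(files: dict[str, float]) -> list[str]:
    """Return files newest-first by mtime.

    Index the head of the list eagerly so a first review can run
    quickly; build the full graph over the tail in the background.
    """
    return [f for _, f in sorted((-mtime, f) for f, mtime in files.items())]

now = time.time()
files = {
    "src/api/routes.py": now - 60,          # touched a minute ago
    "src/legacy/old.py": now - 90 * 86400,  # untouched for months
    "src/core/auth.py": now - 3600,         # touched an hour ago
}
print(index_order(files)[0])  # src/api/routes.py
```

In a real system you'd likely rank by recent commit activity rather than raw mtime, but the effect is the same: the files a new user's first PR touches are almost certainly near the head of this list.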
The Bottom Line
Greptile's technical foundation is strong. The graph-based context model is the right way to do this, and the learning system creates a path toward truly personalized reviews. But setup friction and underutilized feedback loops are holding it back from feeling seamless. These aren't unsolvable problems — they're just the next layer of polish.
We used Mimir to pull this analysis together from Greptile's public docs, support content, and positioning. If you're curious about the full breakdown, the detailed teardown has all the specifics.
