Key Impact
- Time-to-insight reduced from hours to seconds
- Business users query 4,000+ row property datasets directly
- Self-correcting AI agent auto-recovers from errors in 1-3 iterations
- Delivered in 6 weeks; client integrated it into their B2B offering

The Problem
The client manages a global portfolio of commercial real estate assets worth billions. Their business users (asset managers, analysts, investor relations) constantly needed data insights:
- "What's the average occupancy across our NYC office properties?"
- "Show me properties with leases expiring in the next 18 months"
- "Compare NOI trends across our retail vs. industrial portfolio"
The Solution
I built a conversational AI interface that lets business users query their property data in natural language. No SQL required, no waiting for data engineering.
How It Works
Natural Language to Analysis
Users type questions in plain English. The system translates their intent into Python/SQL queries, executes against the property database, and returns visualizations and insights.
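As a rough sketch of the translation step, the backend might combine the schema, instructions, and the user's question into a single prompt and hand it to an LLM. The schema below and the `call_llm` callable are illustrative stand-ins, not the actual client implementation:

```python
# Hypothetical schema context; the real system injects the client's full schema.
SCHEMA_CONTEXT = """
Table properties(id, name, city, property_type, occupancy_pct, noi)
Table leases(id, property_id, tenant, start_date, end_date)
"""

def build_query_prompt(question: str) -> str:
    """Combine schema, instructions, and the user's question into one prompt."""
    return (
        "You are a SQL assistant. Generate a single read-only SQL query.\n"
        f"Schema:\n{SCHEMA_CONTEXT}\n"
        f"Question: {question}\n"
        "Return only the SQL, no commentary."
    )

def translate(question: str, call_llm) -> str:
    """Ask the LLM for SQL answering `question`; `call_llm` is injected so it can be mocked."""
    return call_llm(build_query_prompt(question)).strip()
```

Injecting `call_llm` as a parameter keeps the translation step testable without a live model behind it.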
Self-Correcting Agent
Here's where it gets interesting. LLMs make mistakes. They generate queries with syntax errors, reference wrong columns, or misunderstand the schema. I engineered a self-correcting agent that:
1. Generates an initial query based on the user's question
2. Executes it and catches any errors
3. Analyzes the error message and schema context
4. Regenerates a corrected query
5. Repeats until success (typically 1-3 iterations)
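The loop above can be sketched in a few lines. The function names here are illustrative; `generate_query` stands in for the LLM call and `execute` for the database layer:

```python
def run_with_retries(question, generate_query, execute, max_iters=3):
    """Generate a query, execute it, and feed any error back for regeneration."""
    error = None
    query = generate_query(question, error=None)
    for _ in range(max_iters):
        try:
            return execute(query)          # success: return the result set
        except Exception as exc:           # capture the DB/driver error text
            error = str(exc)
            # Regenerate with the error message as extra context
            query = generate_query(question, error=error)
    raise RuntimeError(f"Query failed after {max_iters} attempts: {error}")
```

The key design choice is feeding the raw error string back into the next generation call, so the model sees exactly what the database rejected.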
Few-Shot Learning
When a query succeeds, the system captures the question-to-query mapping as a few-shot example. Over time, the system builds a library of successful patterns specific to this client's data, improving accuracy and reducing iterations.
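A minimal sketch of this capture step, assuming an in-memory store (the production system would persist these and likely retrieve them by similarity rather than recency):

```python
class FewShotStore:
    """Accumulates successful question-to-query mappings for prompt injection."""

    def __init__(self, max_examples=5):
        self.examples = []              # list of (question, query) pairs
        self.max_examples = max_examples

    def record(self, question, query):
        """Save a mapping once its query has executed successfully."""
        self.examples.append((question, query))

    def as_prompt_block(self):
        """Render the most recent examples as text for the next prompt."""
        recent = self.examples[-self.max_examples:]
        return "\n".join(f"Q: {q}\nSQL: {sql}" for q, sql in recent)
```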
Technical Architecture
Stack:
- React frontend with conversational UI
- Flask backend orchestrating the AI agent
- Azure SQL Database holding property data (4,000+ rows across multiple tables)
- Azure App Service for deployment
- Okta SSO for enterprise authentication
Design highlights:
- Agentic Architecture: The LLM doesn't just generate SQL. It reasons about the data model, handles errors, and iterates toward correct answers.
- Schema Context Injection: Full database schema and sample data injected into prompts for grounding
- Guardrails: Read-only access, query timeouts, and result size limits prevent runaway queries
- Explanation Layer: System explains its reasoning, so users understand (and can correct) the analysis
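As one example of the guardrails, a pre-execution check might reject anything that isn't a `SELECT` and cap the result size. The keyword list and row cap below are illustrative, not the client's actual policy:

```python
# Illustrative write-keyword list; real enforcement would also rely on a
# read-only database role rather than string checks alone.
FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "truncate", "merge")

def enforce_guardrails(sql: str, max_rows: int = 500) -> str:
    """Allow only SELECT statements and cap the number of rows returned."""
    lowered = sql.strip().lower()
    if not lowered.startswith("select"):
        raise PermissionError("Only SELECT statements are allowed")
    if any(kw in lowered.split() for kw in FORBIDDEN):
        raise PermissionError("Write operations are not allowed")
    # Azure SQL (T-SQL) uses TOP rather than LIMIT; wrap the query to cap rows.
    if " top " not in lowered:
        sql = "select top %d * from (%s) AS capped" % (max_rows, sql.strip().rstrip(";"))
    return sql
```

String checks like these are a first line of defense; the read-only database credential is what actually prevents writes.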
The 6-Week Sprint
This project had aggressive timelines. The client needed a working demo for an upcoming investor meeting. We delivered:
Week 1-2: Schema analysis, data profiling, initial agent architecture
Week 3-4: Core conversational interface, self-correction logic, error handling
Week 5: SSO integration, security review, performance optimization
Week 6: User testing, refinements, production deployment
Results
The platform eliminated the data engineering bottleneck entirely. Business users now get answers in seconds instead of hours. The client was so impressed they integrated the technology into their own B2B offering for their investors.
Key wins:
- Hours to seconds: Analysts get insights immediately
- Self-service analytics: No more tickets to data engineering
- B2B integration: Client monetizing the capability with their own customers
- 6-week delivery: From kickoff to production on an aggressive timeline