The Problem
Most engineering teams don't use a single issue tracker. They use two — or three. Jira for product, Azure DevOps for engineering, maybe GitHub Issues for open source. Context switching between platforms is constant, and the same issue often gets created in multiple systems without anyone realising it.
The Flowlenz POC was built to solve that: one dashboard, all your tickets, regardless of source — with AI analysis layered on top.
What Flowlenz Does
At its core, Flowlenz continuously scrapes both Jira and Azure DevOps, normalises every ticket into a single unified data model, and presents them side by side in a React frontend. The interesting part is what happens beyond that:
- Duplicate and similarity detection — tickets from different sources are compared and scored for similarity. A Jira bug and an Azure DevOps work item describing the same problem get linked automatically with a similarity score
- AI analysis — each ticket gets an estimated effort time, an auto-generated summary, and a best-assignee recommendation based on historical assignment patterns
- Incremental sync — neither scraper does a full re-fetch. Each maintains a cursor tracking the last sync timestamp, so only changed issues are processed on subsequent runs
- Advanced search and filtering — case-insensitive search across all sources, filter by status, sort by date, priority, or title, configurable page sizes
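The post doesn't specify how similarity is scored, so as an illustrative stand-in: one simple approach is Jaccard overlap on each ticket's extracted keyword set. The function and field names below are hypothetical, not the POC's actual implementation.

```typescript
// Hypothetical sketch: score two tickets by the overlap of their keyword sets.
function jaccardSimilarity(a: Set<string>, b: Set<string>): number {
  if (a.size === 0 && b.size === 0) return 0;
  let intersection = 0;
  for (const word of a) if (b.has(word)) intersection++;
  const union = a.size + b.size - intersection;
  return intersection / union;
}

// A Jira bug and an Azure DevOps work item describing the same login crash:
const jiraKeywords = new Set(["login", "crash", "oauth", "timeout"]);
const adoKeywords = new Set(["login", "crash", "oauth", "redirect"]);
const score = jaccardSimilarity(jiraKeywords, adoKeywords); // 3 shared / 5 total = 0.6
```

A pair whose score clears some threshold would get a row linking the two tickets, which is all the storage layer needs regardless of how the score was produced.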
Architecture Overview
The system is composed of four services sharing a single PostgreSQL database:
- TicketSystem.API — ASP.NET Core 9 REST API serving the React frontend, handling all reads and exposing Swagger documentation
- JiraScraper — a .NET 9 hosted background service that continuously polls the Jira REST API and upserts into the shared tickets table
- AzureDevOpsScraper — same pattern for Azure DevOps work items
- React frontend — React 18 with TypeScript and Tailwind CSS, communicating with the API via Axios
The two scrapers run as independent background workers — they don't communicate with each other, they just write to the same tickets table. The API reads from that table and serves the merged result to the frontend.
Tech Stack
- Backend API: ASP.NET Core 9.0, Entity Framework Core 9, Swagger/OpenAPI
- Background services: .NET 9 hosted services (IHostedService)
- Frontend: React 18, TypeScript 4.7, Tailwind CSS 3.4, Axios
- Database: PostgreSQL 15+ with Npgsql
- Deployment: Docker Compose — entire stack starts with a single command
The Jira Integration: Cursor-Based Incremental Sync
The Jira scraper uses cursor-based incremental sync rather than fetching all issues every run. It stores a LastUpdatedUtc timestamp per Jira project in the jira_sync_cursors table. On each sync cycle, it queries Jira's REST API for issues updated after that cursor, with a 2-minute overlap buffer to catch updates that land near the cursor boundary (clock skew, indexing lag).
Auth is Basic auth using a Jira API token (the email:token pair, Base64-encoded), which is the standard approach for Jira Cloud. The scraper handles pagination transparently using Jira's nextPageToken values — so a project with 2,000 issues syncs correctly without any manual page management.
Tickets are upserted by issue key. If the same issue key already exists in the database, the latest version wins. This makes the sync idempotent — running it twice produces the same result.
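The cursor mechanics above can be sketched as two small pure functions. This is an illustration of the pattern, not the scraper's actual C# code; nextQueryWindow and advanceCursor are hypothetical names.

```typescript
const OVERLAP_MS = 2 * 60 * 1000; // 2-minute overlap buffer, per the post

interface SyncCursor {
  projectKey: string;
  lastUpdatedUtc: Date;
}

// Query from slightly before the stored cursor, so issues whose "updated"
// timestamp landed inside a clock-skew or indexing-lag window aren't missed.
function nextQueryWindow(cursor: SyncCursor): { updatedAfter: Date } {
  return { updatedAfter: new Date(cursor.lastUpdatedUtc.getTime() - OVERLAP_MS) };
}

// Advance the cursor to when the sync *began*, not when it finished, so
// issues updated mid-sync are picked up on the next cycle.
function advanceCursor(cursor: SyncCursor, syncStartedAt: Date): SyncCursor {
  return { ...cursor, lastUpdatedUtc: syncStartedAt };
}
```

Because tickets are upserted by key, re-reading the overlap window is harmless: the same issue processed twice just overwrites itself.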
The Azure DevOps Integration: Polling With Retry
The Azure DevOps scraper uses Personal Access Token (PAT) authentication and continuously polls for work items at the project level. Unlike the Jira scraper's cursor approach, the Azure DevOps scraper uses a polling interval with three automatic retries on failure and a 5-minute cooldown between retry attempts.
This difference in strategy reflects the APIs themselves — Jira's API makes cursor-based incremental sync straightforward, while Azure DevOps's work item query API is better suited to a polling approach at this integration layer.
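The retry policy reads as: three attempts, five minutes of cooldown between them, then give up and surface the error. A minimal sketch of that shape, using the post's stated values — pollOnce is a hypothetical stand-in for the actual work item fetch:

```typescript
const MAX_ATTEMPTS = 3;
const COOLDOWN_MS = 5 * 60 * 1000; // 5-minute cooldown between retries

// Runs pollOnce up to MAX_ATTEMPTS times, sleeping between failures.
// The sleep function is injectable so tests don't wait five real minutes.
async function pollWithRetry<T>(
  pollOnce: () => Promise<T>,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await pollOnce();
    } catch (err) {
      lastError = err;
      if (attempt < MAX_ATTEMPTS) await sleep(COOLDOWN_MS);
    }
  }
  throw lastError;
}
```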
Database Schema
The schema is designed around the tickets table as the central entity, with satellite tables for AI analysis and similarity relationships:
- tickets — the unified ticket model with fields common to both Jira and Azure DevOps: key, title, description, status, priority, source, complexity, assignee, sprint, team, keywords, and a dirty flag for change tracking
- ticket_ai — a 1:1 relationship with tickets, storing AI-generated fields: estimated time, summary, and best assignee recommendation
- similar_tickets — a many-to-many self-join on the tickets table, storing pairs of similar tickets with a similarity score
- jira_sync_cursors — one row per Jira project, tracking the last sync timestamp with a row version for optimistic concurrency
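Seen from the frontend, the unified model looks roughly like the interfaces below. The post lists the fields but not their exact types or names, so treat this as a sketch:

```typescript
type TicketSource = "jira" | "azuredevops";

// The shape both scrapers normalise into — one row in the tickets table.
interface Ticket {
  key: string;           // e.g. a Jira issue key or an Azure DevOps work item id
  title: string;
  description: string;
  status: string;        // normalised from source-specific status values
  priority: string;      // normalised from source-specific priority scales
  source: TicketSource;
  complexity?: string;
  assignee?: string;
  sprint?: string;
  team?: string;
  keywords: string[];
  dirty: boolean;        // set on change; cleared after re-analysis
}

// 1:1 with Ticket, populated by whichever AI pipeline is plugged in.
interface TicketAi {
  ticketKey: string;
  estimatedTime?: string;
  summary?: string;
  bestAssignee?: string;
}

const sample: Ticket = {
  key: "FLZ-1", title: "Login crash", description: "", status: "open",
  priority: "high", source: "jira", keywords: ["login"], dirty: true,
};
```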
The Unified API
The API exposes a clean paginated endpoint at GET /api/tickets that accepts query parameters for page, page size, search, status filter, and sort order. The response includes pagination metadata so the frontend can render page controls without a separate count query:
{
  "tickets": [...],
  "totalCount": 142,
  "page": 1,
  "pageSize": 20,
  "hasNext": true,
  "hasPrevious": false
}
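The hasNext and hasPrevious flags fall out of the other three fields. A sketch of that derivation — the API's actual implementation isn't shown here, so the function name is hypothetical:

```typescript
interface PageMeta {
  totalCount: number;
  page: number;
  pageSize: number;
  hasNext: boolean;
  hasPrevious: boolean;
}

// Derive pagination flags server-side so the frontend never needs
// a separate count query to render its page controls.
function pageMeta(totalCount: number, page: number, pageSize: number): PageMeta {
  return {
    totalCount,
    page,
    pageSize,
    hasNext: page * pageSize < totalCount,   // more rows beyond this page?
    hasPrevious: page > 1,                   // 1-indexed pages
  };
}

pageMeta(142, 1, 20); // → hasNext: true, hasPrevious: false, matching the response above
```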
Search uses PostgreSQL's ILIKE operator for case-insensitive matching across ticket key and title. A 300ms debounce on the frontend prevents excessive API calls as the user types.
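The debounce is a standard trick: reset a timer on every keystroke and only fire when the user pauses. A generic sketch with the post's 300 ms window — the frontend's actual helper may differ:

```typescript
// Returns a wrapped function that delays fn until callers have been
// quiet for waitMs; intermediate calls cancel the pending one.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: only the last keystroke within 300 ms triggers an API call,
// e.g. axios.get("/api/tickets", { params: { search: term } }).
const search = debounce((term: string) => { void term; }, 300);
```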
The GET /api/tickets/{id} endpoint returns a single ticket enriched with its AI analysis from ticket_ai and a list of similar tickets from the similar_tickets table — all surfaced in a detail modal on the frontend.
Docker Deployment
The entire stack — API, both scrapers, frontend, and PostgreSQL — runs with a single command:
docker compose -f docker-compose.prod.yml up
This was a deliberate design choice. Demo environments, staging, and client handoffs all start from the same command. No dependency installation steps, no manual database setup — the compose file handles everything including the initial schema migration on first run.
AI Features: What's Built vs What's Planned
The AI fields — estimated time, summary, best assignee — are stored in the ticket_ai table and surfaced in the frontend. The storage and retrieval layer is complete. The AI generation pipeline itself (the model that produces these values) is a separate concern outside this POC's scope, which is an intentional architectural decision: the data layer doesn't care how the AI fields are populated, only that they conform to the schema.
This separation means you can plug in any AI backend — OpenAI, a fine-tuned local model, a rule-based heuristic system — without touching the API or frontend.
Similarly, similarity scores are stored in similar_tickets and shown in the UI. The similarity computation engine is a separate service that can be swapped independently.
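That swap-in point can be made concrete as an interface: any backend that can populate the three ticket_ai fields fits. The names below are hypothetical illustrations, not the POC's API:

```typescript
interface AiAnalysis {
  estimatedTime: string;
  summary: string;
  bestAssignee: string;
}

// Anything that produces the three ticket_ai fields is a valid backend:
// OpenAI, a local model, or the rule-based stand-in below.
interface AiBackend {
  analyse(ticket: { title: string; description: string; assignee?: string }): Promise<AiAnalysis>;
}

// A trivial heuristic backend, useful before any model is wired up.
const heuristicBackend: AiBackend = {
  async analyse(ticket) {
    return {
      estimatedTime: ticket.description.length > 500 ? "2d" : "4h",
      summary: ticket.title,
      bestAssignee: ticket.assignee ?? "unassigned",
    };
  },
};
```

Swapping backends means swapping one object; the API and frontend only ever see rows in ticket_ai.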
Key Design Decisions
- Shared database, independent scrapers — each scraper writes independently, avoiding the complexity of inter-service communication. The API reads the merged result naturally
- Cursor-based sync for Jira — fetching only changed tickets is critical for large Jira projects where a full re-fetch would be too slow and hit API rate limits
- Upsert by source key — makes syncs idempotent, which matters for a background service that runs continuously
- Dirty flag on tickets — lets the similarity and AI services identify tickets that have changed since their last analysis run, without requiring a full table scan
- Docker-first deployment — a POC that's hard to run is a POC nobody runs. Single-command startup was a hard requirement
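The dirty-flag decision is worth a sketch: downstream workers select only flagged tickets, analyse them, and clear the flag. Shown here over an in-memory list for illustration — the real services would run the equivalent as a WHERE clause against the tickets table:

```typescript
interface FlaggedTicket {
  key: string;
  dirty: boolean;
}

// Equivalent of: SELECT * FROM tickets WHERE dirty = true
function ticketsNeedingReanalysis(all: FlaggedTicket[]): FlaggedTicket[] {
  return all.filter((t) => t.dirty);
}

// Clear the flag once the AI/similarity pass has run for this ticket.
function markAnalysed(t: FlaggedTicket): FlaggedTicket {
  return { ...t, dirty: false };
}
```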
Key Lessons
- Normalising data from multiple external APIs into a single model is harder than it looks — field names, status values, and priority scales all differ between Jira and Azure DevOps
- Cursor-based incremental sync is worth the added complexity for any integration that needs to stay current with large datasets
- Separating AI storage from AI generation keeps the system flexible — the storage layer ships while the AI pipeline is still being developed
- PostgreSQL's ILIKE with a proper index is fast enough for search on a reasonably sized tickets table without needing a dedicated search engine
- Docker Compose multi-service setups need explicit health checks and startup ordering — the API should not start before the database is ready
What's Next
The POC validates the core architecture. The natural next steps are: plugging in the AI generation pipeline, adding authentication so teams can manage their own API credentials, building the similarity computation engine, and scaling the sync frequency based on project activity levels.
If you're building a tool that integrates across multiple project management platforms, or need a custom aggregation layer for your engineering workflow, get in touch.