A 50-person engineering company had an IT support problem that sounds familiar: tickets were falling through the cracks, engineers were spending time routing requests manually, and nobody could tell you at any moment how many tickets were breaching SLA. The tracking system was a spreadsheet updated once a day — by hand.
Six weeks later: automated routing on 96% of tickets, real-time SLA breach alerts in Slack, and first-response time down from 8 hours to 2.8 hours. Here's exactly what we built and how.
## The Starting Point: What Was Broken
Before automation, the IT queue had three problems compounding each other:
- No routing rules. Every ticket landed in one queue. An engineer would triage manually — is this IT, HR, or a dev-tools request? — and reassign. This added 45–90 minutes to every ticket's clock before anyone even looked at it.
- SLA tracking was manual. Someone checked the spreadsheet each morning and flagged breaches after the fact. By then, the damage was done.
- No visibility for requesters. Users submitted tickets into a void. No confirmation, no status updates, no ETA. They followed up in Slack, which created a second informal queue nobody tracked.
The goal wasn't to replace Jira — they were already invested in it. The goal was to make Jira actually enforce the SLAs they had defined but never automated.
## The Architecture: Three Layers

### Layer 1 — AI-Powered Ticket Classification
Every new ticket goes through a classifier before it's assigned to anyone. The classifier reads the summary and description and outputs a category: IT Hardware, Software Access, Dev Tools, HR Systems, or Unknown.
```python
def classify_ticket(summary: str, description: str) -> str:
    prompt = f"""Classify this IT support ticket into one of:
IT_HARDWARE, SOFTWARE_ACCESS, DEV_TOOLS, HR_SYSTEMS, UNKNOWN

Summary: {summary}
Description: {description}

Respond with only the category name."""
    result = llm.complete(prompt)
    return result.strip()
```
Accuracy on this team's ticket vocabulary: 96%. The 4% that come back as UNKNOWN get flagged for manual triage — but that's 4 tickets per 100, not 100.
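In practice, LLMs occasionally return extra words or odd casing instead of a bare label. A thin normalization step keeps anything outside the five labels from reaching the routing field (a sketch; the function name is ours, not part of the original pipeline):

```python
VALID_CATEGORIES = {"IT_HARDWARE", "SOFTWARE_ACCESS", "DEV_TOOLS", "HR_SYSTEMS", "UNKNOWN"}

def normalize_category(raw: str) -> str:
    """Map raw model output onto the fixed label set; anything else becomes UNKNOWN."""
    label = raw.strip().upper().replace(" ", "_")
    return label if label in VALID_CATEGORIES else "UNKNOWN"
```

Anything the wrapper can't match falls into the same UNKNOWN manual-triage path as a genuine low-confidence ticket, so a malformed response never mis-routes silently.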
### Layer 2 — Jira Automation Rules
Once classified, Jira automation rules handle assignment and SLA clock start. Each category maps to an assignee group and a priority tier:
| Category | Assignee Group | P1 SLA | P2 SLA |
|---|---|---|---|
| IT Hardware | IT Team | 2h first response | 8h first response |
| Software Access | IT Team | 4h first response | 24h first response |
| Dev Tools | Engineering Ops | 4h first response | 24h first response |
| HR Systems | HR | 8h first response | 48h first response |
Jira's built-in automation rules handle the assignment. The AI classifier sets a custom field (tc_category), and a Jira automation rule fires on that field change and routes accordingly. No webhook server required — just Jira's native automation engine.
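The hand-off can be sketched in a few lines, assuming the same `jira` Python client as the snippets above and a placeholder field id (`customfield_10050`) for tc_category — look up the real id in your own instance:

```python
CATEGORY_FIELD = "customfield_10050"  # placeholder; find tc_category's id in your instance

def category_update(category: str) -> dict:
    """Build the fields payload for a Jira select-type custom field."""
    return {CATEGORY_FIELD: {"value": category}}

def apply_classification(issue, category: str) -> None:
    # Writing tc_category is the trigger: the Jira automation rule watching
    # this field then assigns the ticket and starts the SLA clock.
    issue.update(fields=category_update(category))
```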
### Layer 3 — SLA Breach Alerts via Slack
This is where teams usually stall. Jira does have SLA tracking built in, but the alerts are email-only and land in inboxes nobody checks promptly. We moved the alerts to Slack:
```python
def check_sla_breaches():
    # Query Jira for open tickets approaching SLA breach
    jql = 'project = IT AND "Time to first response" < 30m AND status = Open'
    tickets = jira.search_issues(jql)
    for ticket in tickets:
        slack.post_message(
            channel="#it-alerts",
            text=f":warning: SLA breach in 30min: {ticket.key} — {ticket.summary}\n"
                 f"Assigned to: {ticket.assignee}\n"
                 f"<{ticket.permalink()}|View in Jira>"
        )
```
This runs every 15 minutes via a cron job. The assignee sees the alert before the breach, not after. The signal-to-noise ratio matters here: we alert only on tickets within 30 minutes of breach, not on all open tickets. Teams that alert too broadly train themselves to ignore it.
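The schedule itself is a single crontab entry (paths are placeholders for wherever you deploy the script):

```shell
# Check for approaching SLA breaches every 15 minutes; log output for debugging
*/15 * * * * /usr/bin/python3 /opt/sla/check_sla_breaches.py >> /var/log/sla-check.log 2>&1
```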
## Results After 6 Weeks
The most valuable metric wasn't response time — it was visibility. The team lead now has a live dashboard. Breaches are caught before they happen. And the informal Slack queue (where users were following up) dropped by over 80% because requesters get an automatic status update when their ticket is assigned.
## What This Doesn't Solve
Automation handles routing and alerting well. It doesn't fix:
- Understaffing. If your team genuinely can't handle the volume, faster routing just surfaces the problem more clearly.
- Bad ticket quality. If requesters submit one-line tickets with no context, the AI classifier still gets confused. We added a required template in Jira to capture category hints — this raised classification accuracy from 91% to 96%.
- SLA definitions that don't match reality. The hardest part of this project was the first week: agreeing on what the SLAs should actually be. The technology took 3 weeks; the alignment took longer.
## How Long Does This Take to Build?
For a team of this size (50 people, ~200 tickets/month), the full implementation — classifier, routing rules, Slack alerts, reporting dashboard — takes about 3 weeks. Most of that time is calibrating the classifier vocabulary and configuring Jira's SLA clocks, not writing code.
If you're starting from scratch with no Jira Service Management license, add a week for setup and migration. If you already have JSM and just need the AI layer and Slack integration, it's closer to 2 weeks.
The ongoing maintenance is near zero. The classifier model is static (we fine-tuned once on historical tickets). The automation rules are Jira-native. The cron job runs on DigitalOcean for €5/month.