Machine Learning used as early as the 1950s
Models trained for very specific use cases
Transformer architecture changed everything
Today: models trained on huge amounts of data (task-based training not needed)
Enhance personal AI usage with tools (💘MCP)
How do we move beyond personal use of AI?
Incorporate AI as a system component!
Most software isn't AI-based and shouldn't be
Q: When does AI make sense as a system component?
A: Whenever you need a skilled system user behind an API
Domain expert access
Rapidly testable component
Easily upgraded (reference a new model)
It is a fast, competent and available system user
(very capable but also non-deterministic)
Work out where system users can add value
Use validation, guardrails, clear communication
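One way to apply the guardrail idea above: treat the model as an untrusted system user and validate its output before it reaches the rest of the system. A minimal sketch, assuming a JSON-classifying model; `call_model` is a stub standing in for a real LLM API call, and the category names are invented:

```python
import json

ALLOWED_CATEGORIES = {"billing", "technical", "account"}

def call_model(prompt: str) -> str:
    # Stub standing in for a real (non-deterministic) LLM API call.
    return '{"category": "billing", "confidence": 0.92}'

def classify_with_guardrails(prompt: str, retries: int = 2) -> dict:
    """Validate the model's output before trusting it; retry on bad output."""
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: try again
        if result.get("category") in ALLOWED_CATEGORIES:
            return result
    raise ValueError("Model output failed validation")

print(classify_with_guardrails("My invoice total is wrong")["category"])  # billing
```

The retry-then-fail pattern keeps the non-determinism contained: downstream code only ever sees validated, well-formed results.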
Human user: user(wait) → frontend → backend
AI user: backend → AI(prompt)
Combined: human → frontend → backend → AI
Future state? human → AI → backend 🤔
Ingest: Gather the necessary data
Process: Transform data into usable format
Prompt: Construct effective prompts + tools
Record: Store results and metadata
Report: aggregate/summarise results, iterate!
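The five steps above can be sketched as a loop. This is a skeletal illustration, not the talk's actual implementation: `prompt` stands in for a real model call, and the data is invented:

```python
# Ingest → Process → Prompt → Record → Report, as a minimal pipeline.
def ingest() -> list[str]:
    return ["  Raw Record 1  ", "  Raw Record 2  "]  # gather the necessary data

def process(raw: list[str]) -> list[str]:
    return [r.strip().lower() for r in raw]  # transform into a usable format

def prompt(item: str) -> dict:
    # Stand-in for constructing a prompt and calling a model.
    return {"input": item, "summary": f"summary of {item}"}

def record(store: list, result: dict) -> None:
    store.append(result)  # store results and metadata

def report(store: list) -> str:
    return f"{len(store)} results recorded"  # aggregate, then iterate

store: list = []
for item in process(ingest()):
    record(store, prompt(item))
print(report(store))  # 2 results recorded
```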
None of them are AI products
Startups through to enterprise
All used AI as a key system component
Problem: SaaS provider with lots of easy support requests
Solution: Integrate with helpdesk, reply with relevant video or tutorial
Tech: API.ai (2017, later renamed Dialogflow), needed training
Outcome: over 10% of requests replied to with useful content, but about the same % of garbage
Response quality was limited by the quality of the knowledge base 🗑️⬇️ 🗑️⬆️
Most requests still needed a human in the loop (this type of use case would likely be more successful with present-day models)
Correct responses were not enough; avoiding incorrect responses was just as important
Problem: Large insurance provider needed validation for aging claims system
Solution: Take a large anonymised snapshot, follow I-P-P-R-R to gather meaningful insights
Tech: OpenAI model on Azure
Outcome: An automated review tool that could be operated by non-technical staff
Operating on complex/messy data is much harder than AI integration
(database was optimised for storage limits of the 1970s)
There is no substitute for deeply understanding a problem domain when crafting an effective solution
At one stage a single execution could have cost up to $50,000,000 (500x our budget)
Big O notation matters!
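To make the Big O point concrete: a review tool that compares every claim against every other claim pays for O(n²) model calls, while a single pass pays for O(n). The per-call price and claim count below are illustrative assumptions, not the project's actual figures:

```python
def llm_cost(calls: int, price_per_call: float = 0.05) -> float:
    """Estimated spend for a batch of model calls (price is an assumption)."""
    return calls * price_per_call

n = 100_000                    # claims in the snapshot (illustrative)
pairwise = n * (n - 1) // 2    # O(n^2): compare every pair of claims
linear = n                     # O(n): one pass per claim

print(f"O(n^2): ${llm_cost(pairwise):,.0f}")
print(f"O(n):   ${llm_cost(linear):,.0f}")
```

At these assumed numbers the quadratic design costs tens of thousands of times more than the linear one, which is how a single execution can blow past a budget by orders of magnitude.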
Problem: billing provider needed data from 1000s of pages, with skilled human oversight
Solution: AI-powered OCR as an automated form-prefill
Tech: Claude via Bedrock & Poppler (PDF library)
Outcome: Higher quality OCR than dedicated OCR teams were able to offer
Combination of printed and (mostly doctor 😅) handwriting is not easy to work with
Pre-filling can be more valuable than complete automation
AI can build its own guardrails! We scraped all the MBS (Medicare Benefits Schedule) rules, over 13k
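A sketch of that kind of rule-based guardrail: model-extracted items are checked against a locally held rule set before being pre-filled into the form. The rule entries below are invented for illustration; the real set was scraped from the MBS:

```python
# Hypothetical MBS-style rule set (the real scraped set had 13k+ entries).
MBS_RULES = {
    "23": {"description": "GP consultation, level B", "max_fee": 41.40},
    "36": {"description": "GP consultation, level C", "max_fee": 80.10},
}

def validate_extracted_item(item_code: str, fee: float) -> bool:
    """Reject OCR/model-extracted items that break a known rule."""
    rule = MBS_RULES.get(item_code)
    if rule is None:
        return False  # unknown item code: flag for human review
    return fee <= rule["max_fee"]

print(validate_extracted_item("23", 41.40))  # True
print(validate_extracted_item("99", 10.00))  # False (unknown code)
```

Items that fail validation are not discarded, just left blank for the human reviewer, which is what makes pre-fill safer than full automation.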
Problem: Customer had an aggregated repository of brand reviews with personal data
Solution: Self-hosted LLM to remove personal data from reviews
Tech: n/a
Outcome: Client said no ☹️
Many people/organisations are reluctant to use AI as a solution component
3 broad tiers of AI deployment options (convenience vs security)
Problem: Multi-month schedule (1000s of transport events), 100s of soft/hard constraints
Solution: Create rule structures, get AI to fill them in with human approval
Tech: Bedrock, rules engine
Outcome: AI as a rule generator rather than a rule executor
Context window limits and token usage need to be considered carefully
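One way to respect the context-window limit when feeding hundreds of constraints to a model: estimate token counts and greedily pack constraints into prompts that fit a budget. The 4-characters-per-token heuristic is a rough assumption, not an exact tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def chunk_by_token_budget(items: list[str], budget: int) -> list[list[str]]:
    """Greedily pack items into chunks that fit within the prompt budget."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for item in items:
        cost = estimate_tokens(item)
        if current and used + cost > budget:
            chunks.append(current)
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        chunks.append(current)
    return chunks

constraints = [f"constraint {i}: vehicle must rest 8h" for i in range(10)]
print(len(chunk_by_token_budget(constraints, budget=20)))  # 5
```

A real system would use the model provider's tokenizer for the estimate, but the budgeting structure is the same.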
Non-deterministic actors can build great deterministic solutions
Human users reviewing system users is a great synergy
Build in AI as the engine, not the whole product
Remember to Ingest, Process, Prompt, Record, Report
AI as a system component is mostly an API + communication skills
AI can empower humans rather than replace them
Thank you for your time!
Contact me at simon@terem.com.au
Scan to visit this presentation