A Practical Guide to Reducing Latency and Costs in Agentic AI Applications
Scaling companies that integrate Large Language Models (LLMs) into their agentic AI products are likely to face two significant challenges: rising latency and rising costs. Increasing traffic and long prompts slow LLM response times (latency), which can hurt user experience and sales, and application costs climb sharply as API (Application Programming…