The “tug of war between FUD and FOMO” in Australian businesses over generative AI has been giving way to reality.
Over the past year-and-a-half, Australian organisations have initiated thousands of GenAI workshops, technical development programs, pilot programs and software deployments.
Some of those activities have fizzled; others have become works in progress or successful deployments.
Among those with a front row seat to the Gen AI story are Amazon Web Services (AWS) partners Mantel Group, Slalom and Thoughtworks. Together, the three firms work with hundreds of AWS customers around the world.
CRN Australia spoke with the three firms at the AWS for Software Companies AI Day in Sydney in late June, where we asked them how this story was playing out so far from their perspectives.

Kathryn Collier, Head of AI & ML Engineering, Mantel Group
CRN Australia: Have you moved many customers’ GenAI projects beyond proof of concept?
Kathryn Collier: Mantel Group has seen a rapid acceleration in our Gen AI service offerings. In the past six months alone we’ve initiated a new Gen AI project for a customer every two weeks, three times as many as in the prior six months.
We are delivering a number of Gen AI initiatives into production for clients; the majority are internal-facing applications with a focus on operational efficiency - for example, synthesising a lot of information quickly for call centre agents, or smart automated document processing.
CRN Australia: In your experience, are businesses wanting to wield AI responsibly?
Kathryn Collier: Absolutely. Almost every client wants to ensure responsible AI practices. Where it breaks down in reality is navigating the trade-off against the time pressure to get results quickly. Well-intentioned technical practitioners need business-wide buy-in to lay the foundations for responsible AI. It requires a holistic approach, with bias mitigation, security practices, explainability tools, responsible coding and thorough documentation working together.
It’s encouraging that we’re having these conversations now. Three years ago this conversation was limited to data scientists and PhD students, and the conversation with clients was more ‘Why am I not seeing the ROI from my data science team?’ Now it’s ‘How can we get Gen AI initiatives into production responsibly?’
CRN Australia: Has there been a good response to these conversations about responsible AI?
Kathryn Collier: Yes. Among our biggest clients it’s a long game. There’s business-wide education required, with roles beyond the tech team; it can’t just sit on the shoulders of technical practitioners. I’ve seen a massive uptick in that understanding. The c-suite and board are no longer passive bystanders of this AI thing squirrelled away in a corner; they absolutely have to have some understanding of what AI is now. It’s put a big magnifying glass on the importance of AI, and the importance of responsible AI. Leaders in AI five to ten years from now will be those that focus on a business-wide approach to responsible practices and invest in their data now.
CRN Australia: How big is the team you work in?
Kathryn Collier: Mantel Group has a core AI and ML engineering team of about 50, up from about a dozen three-and-a-half years ago when I joined, and we continue to grow.

Chris Howard, Slalom Managing Director – Data, Analytics & AI
CRN Australia: What’s exciting you personally in the GenAI space?
Chris Howard: I’m excited about how companies such as AWS are enabling and doubling down on the concept of agents, allowing customers to break down bigger problems into tasks that can be solved serially. AI agents allow us to go to different sources and use a selection of tools to find a solution. While a large language model has a lot of value, it is not the only tool in your toolbox, so you can find your best option by asking yourself questions like: how do I chain things together? How do I use this concept of an agent to solve a piece of a problem and tackle more complex problems? The ability to expand our solutions with AI agents is something I think we'll see continuing to grow and drive adoption.
It's a little bit like the way monolithic applications have evolved and been broken down and re-architected, or decomposed, into more of a services-oriented approach with microservices. I see agents as fulfilling much the same prophecy. We’re moving away from the uber-prompt or the franken-prompt, where people try to encode as much as possible into a single prompt. We should be specific about the prompt we’re writing and the model we're using, use the best prompt to deliver on an outcome, and chain solutions together to deliver on more complex outcomes.
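To make that pattern concrete, here is a minimal, hypothetical sketch of prompt chaining. The `call_llm` helper is a placeholder for whichever model client an organisation actually uses (Amazon Bedrock or otherwise), not a real API; the point is the decomposition into small, focused prompts, not the plumbing.

```python
# A minimal sketch of the chaining pattern: break a big problem into
# smaller tasks, solve them serially, then synthesise the results.
# `call_llm` is a hypothetical stand-in for a real model client.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model invocation."""
    raise NotImplementedError("wire this up to your model provider")

def answer_complex_question(question: str, context: str) -> str:
    # Step 1: decompose the big problem into smaller, serial tasks.
    subtasks = call_llm(
        "Break this question into two to four sub-questions, "
        f"one per line:\n{question}"
    ).splitlines()

    # Step 2: answer each subtask with its own small, focused prompt,
    # rather than one 'franken-prompt' that tries to do everything.
    partial_answers = [
        call_llm(f"Using only this context:\n{context}\n\nAnswer: {task}")
        for task in subtasks
    ]

    # Step 3: synthesise the partial answers into a final response.
    findings = "\n".join(partial_answers)
    return call_llm(
        f"Combine these findings into one answer to '{question}':\n{findings}"
    )
```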
CRN Australia: Gen AI offerings are evolving quickly. What are some implications?
Chris Howard: The generative AI space is evolving extremely quickly. Features that organisations thought they had to build – whether that was six, nine or 12 months ago – are now available as fully managed services. So, we've heard a lot [at the AWS for Software Companies AI Day in Sydney in late June] about RAG (retrieval-augmented generation) as a technique for augmenting large language models with enterprise data. Now we should think about how to combine traditional information retrieval techniques with large language models. That pattern has become one of the go-to approaches organisations across the world are adopting to extract value from generative AI.
It was great to see Amazon Bedrock launched locally, and to see some of the announcements around knowledge bases and a fully managed RAG capability natively within Bedrock. It comes back to this point of, what I thought I needed to build to be proficient in this space is now a fully managed service. And so, back to AWS and their core value proposition, which is all about removing that heavy lifting and allowing you to get on with doing your job rather than taking care of the plumbing. I think we'll continue to see features like that get rolled out.
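The shape of the RAG pattern itself is simple to sketch. In the toy example below, a keyword-overlap retriever stands in for the embedding-based search a production system (or a managed service such as Bedrock's knowledge bases) would use; it is kept deliberately naive so the example is self-contained.

```python
# A toy illustration of retrieval-augmented generation (RAG): fetch
# the most relevant documents, then hand them to the model as context.
# Keyword overlap stands in for real embedding-based retrieval here.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that gets sent to the LLM."""
    context = "\n---\n".join(retrieve(query, docs))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```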
Another point that really resonates with me with Amazon Bedrock is the optionality and model choice. Today's best models will not be tomorrow's best models - today's innovation will be tomorrow's relic. So, having that optionality is super critical. As organisations look to experiment and move forward into production, having that choice and flexibility is going to be something that they rely on a lot.
CRN Australia: What is the unstructured data opportunity?
Chris Howard: If you look at where most organisations have committed time and effort, you can see that it’s in figuring out how to make structured data more easily accessible, consumable and trusted across the organisation. Most organisations still run their business using this structured data in the shape of spreadsheets, relational databases and tables. They drive their business by building the right analytics and the right insights to make good business decisions, but it's predominantly from structured data.
But the rub is that the world we live in and experience as humans is not structured. Eighty percent of what we experience is unstructured data – images, videos, acoustics, voice, text. So, one of the benefits that you get from a large language model is the ability to interpret those things: not just to create new content but also to understand content.
How can I listen to a call transcript and actually extract the real sentiment of the caller?
Think about a new device coming out in the mobile industry, and a customer says ‘I love my new phone, it's got awesome battery and it’s powered all day long and I really love the interface, but the speaker sucks’. Well, how do you fully measure that sentiment? The feedback is somewhat nuanced, and language models are rather good at determining this sentiment, allowing organisations to act on it.
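A hypothetical sketch of how that nuance might be captured: asking the model for structured, aspect-level output rather than a single overall score. As before, `call_llm` is a placeholder, not a real API.

```python
# Hypothetical aspect-level sentiment extraction for a review like the
# phone example above. Requesting JSON keeps the nuance (battery:
# positive, speaker: negative) machine-actionable downstream.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model invocation."""
    raise NotImplementedError("wire this up to your model provider")

def aspect_sentiment(review: str) -> dict[str, str]:
    prompt = (
        "For the product review below, return a JSON object mapping each "
        "product aspect mentioned to 'positive', 'negative' or 'mixed'.\n"
        f"Review: {review}"
    )
    return json.loads(call_llm(prompt))
    # Expected shape for the phone review: {"battery": "positive",
    #   "interface": "positive", "speaker": "negative"}
```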
How can I analyse imagery and create metadata, for example, so I can tag my images more easily and then start to find them? How do I do the same thing for a video? This capability of language models presents a whole new opportunity to better understand the 80 percent of data that is unstructured.
The analogy I often use is: we have five senses, and what if you chose to make decisions using only one of them – thus using just twenty percent of the sensory data available to you? You’d probably make some pretty uninformed decisions as you smell your way into the room, stub your toe and bang your knee. We need to be able to interpret and use all of the data available to us in order to make more informed decisions and deliver on great outcomes.
Andy Nolan, Thoughtworks Director of Emerging Technologies
CRN Australia: How can we get better results from LLMs?
Andy Nolan: The industry today grapples with a major challenge: how to gauge the performance of systems that leverage large language models (LLMs). These evaluations help us understand whether the system is providing the answers we expect. While creating a proof-of-concept (POC) with a prompt that’s about 80 percent accurate is relatively easy, perfecting the final 20 percent is where things get tricky.
To address these evaluation challenges, Thoughtworks recently acquired a company called Watchful. Watchful specialises in unstructured data labelling and does advanced research into LLM evaluation techniques. With countless input and output possibilities, the industry-standard technique of unit-test-style LLM evaluations isn’t going to be sufficient. Watchful’s research explores novel techniques to provide reliable evaluations of LLM output, avoiding embarrassment when the rubber hits the road.
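For context, here is what that unit-test style of evaluation looks like in a sketch: a fixed set of prompts with expected answers, checked mechanically. The cases and the `call_llm` helper below are invented for illustration; the limitation Nolan describes is that no fixed list can cover the open-ended input space.

```python
# A sketch of unit-test-style LLM evaluation: fixed prompts, expected
# substrings, a pass rate. Useful for catching regressions on known
# cases, but it cannot cover countless input/output possibilities.
# `call_llm` and the cases below are illustrative placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model invocation."""
    raise NotImplementedError("wire this up to your model provider")

EVAL_CASES = [
    # (prompt, substring the answer must contain)
    ("What excess applies to a standard motor claim?", "$500"),
    ("What documents are needed to lodge a claim?", "proof of ownership"),
]

def pass_rate() -> float:
    """Fraction of cases whose answer contains the expected text."""
    passed = sum(
        expected.lower() in call_llm(prompt).lower()
        for prompt, expected in EVAL_CASES
    )
    return passed / len(EVAL_CASES)
```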
CRN Australia: Where have you seen the most adoption of Gen AI?
Andy Nolan: The banking, finance, and insurance sectors are leading the way in adopting generative AI. This might not make sense at first because generative AI models don't give predictable results, and these sectors are tightly regulated. However, these industries must offer and manage very complex products effectively, and that’s driving their accelerated adoption of generative AI.
Typically, businesses that sell complex products have high costs for customer service. For instance, when someone buys an insurance policy, the initial costs are low. However, when a claim is made, such as after a car accident, the call centre agent must assess and understand all the details of the policy in real-time while talking to customers. Ensuring that agents handle these situations accurately and reliably incurs significant cost for the insurance company.
Generative AI has the potential to significantly impact how agents manage and assist customers with complex products. In sectors like banking, loans and insurance, where agents must navigate extensive terms and conditions, AI can enhance their performance by streamlining information retrieval, expediting call resolution and reducing the need for follow-up communications – all critical metrics for call centres.
CRN Australia: How can success with Gen AI be measured?
Andy Nolan: For solutions where the processes and metrics are already well established, such as a generative AI solution that helps call centre agents answer customer queries, measuring success becomes fairly straightforward. However, for solutions where value is not objectively measured, determining metrics can be difficult. As an example, if a marketing department utilises generative AI to develop campaigns, it can be challenging to measure quantitatively the impact of the solution.
We recommend starting with solutions where the impact of generative AI is easy to measure and monitor, before moving on to more complex ones. By focusing on areas with established metrics, you can more easily quantify AI’s benefits. Measuring overall productivity improvements across an entire organisation is far more challenging to quantify.
CRN Australia: What should organisations consider when it comes to using AI responsibly?
Andy Nolan: Responsible AI discussions often focus on fairness, bias and training data, or on whether customers and employees are aware they are interacting with AI or reading AI-generated news. However, user experience is often overlooked. Whether the customer is an internal employee or an external user, it's crucial that they understand how they are interacting with AI. Responsible AI needs to be part of the user experience design, not an afterthought. Take a call centre scenario: am I talking to a bot or a human? It's really important to design systems where this transparency is clear to the user.
Viewing responsible AI through the lens of the user journey helps in understanding the broader implications. It ensures that developers are conscious of how they use data and the impact of their AI systems. This user-centric approach is crucial for maintaining ethical standards and fostering trust in AI technologies.
CRN Australia: Is there an opportunity for Thoughtworks to work with MSPs on Gen AI?
Andy Nolan: The managed service space represents a significant opportunity for leveraging AI to deliver more value and we have observed rapid growth in this area at Thoughtworks. Our role can vary from building and maintaining systems to modifying them so they can be effectively managed and monitored by AI. Our goal is to reduce our clients’ total cost of ownership while continuously enhancing the systems' reliability and adding new features over time.