
Chris Howard, Slalom Managing Director – Data, Analytics & AI
CRN Australia: What’s exciting you personally in the GenAI space?
Chris Howard: I’m excited about how companies such as AWS are enabling and doubling down on the concept of agents, allowing customers to break bigger problems into tasks that can be solved serially. AI agents allow us to go to different sources and use a selection of tools to find a solution. A large language model has a lot of value, but it is not the only tool in your toolbox, so you find your best option by asking questions like: how do I chain things together? How do I use the concept of an agent to solve a piece of a problem and tackle more complex problems? The ability to expand our solutions with AI agents is something I think we’ll see continue to grow and drive adoption.
It’s a little bit like the way monolithic applications have evolved and been broken down and re-architected, or decomposed, into a more services-oriented approach with microservices. I see agents as following that same path. We’re moving away from the uber prompt, or the franken-prompt, where people try to encode as much as possible into a single prompt. We should be specific about the prompt we’re writing and the model we’re using, use the best prompt to deliver on an outcome, and chain solutions together to deliver on more complex outcomes.
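To make the chaining idea concrete, here is a minimal sketch of breaking one question into focused prompts that are solved serially, with each result feeding the next step. The model ID, region and prompts are illustrative assumptions for this sketch, not something Howard specifies.

```python
import boto3

# Sketch only: several small, focused prompts chained together instead of one
# "franken-prompt". Model ID and region are assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-2")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # hypothetical choice

def ask(prompt: str) -> str:
    """Send one focused prompt to the model and return its text response."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Break a bigger problem into tasks solved serially, feeding each result forward.
question = "Why did churn rise last quarter?"
sub_tasks = ask(f"List the three analyses needed to answer: {question}")
findings = ask(f"Given these analyses:\n{sub_tasks}\nSummarise the likely drivers.")
answer = ask(f"Write a two-sentence executive answer to '{question}' using:\n{findings}")
print(answer)
```

In practice each step could use a different model or call out to other tools; the point is that each prompt stays narrow and the chain delivers the more complex outcome.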
CRN Australia: Gen AI offerings are evolving quickly. What are some implications?
Chris Howard: The generative AI space is evolving extremely quickly. Features that organisations thought they had to build themselves – whether that was six, nine or 12 months ago – are now available as fully managed services. We’ve heard a lot [at the AWS For Software Companies AI Day in Sydney in late June] about RAG as a technique for augmenting large language models with enterprise data. Now we should think about how to combine traditional information retrieval techniques with large language models. That combination has become one of the go-to patterns organisations across the world are adopting to extract value from generative AI.
It was great to see Amazon Bedrock launched locally, and to see the announcements around knowledge bases and a fully managed RAG capability natively within Bedrock. It comes back to that point: what I thought I needed to build to be proficient in this space is now a fully managed service. That is AWS’s core value proposition – removing the heavy lifting and letting you get on with your job rather than taking care of the plumbing. I think we’ll continue to see features like that get rolled out.
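As a rough illustration of what “fully managed RAG” looks like in practice, the sketch below uses Bedrock Knowledge Bases to retrieve relevant enterprise documents and generate a grounded answer in a single call. The knowledge base ID, model ARN, region and question are placeholders, not values from the article.

```python
import boto3

# Minimal sketch of the managed RAG pattern with Bedrock Knowledge Bases:
# one call retrieves relevant documents and generates a grounded answer.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="ap-southeast-2")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our policy on handset battery replacements?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:ap-southeast-2::foundation-model/"
                        "anthropic.claude-3-haiku-20240307-v1:0",  # example model
        },
    },
)

print(response["output"]["text"])            # the grounded answer
for citation in response.get("citations", []):
    print(citation)                          # source passages behind the answer
```

The retrieval, chunking and vector plumbing all sit behind the service, which is exactly the heavy lifting Howard is describing.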
Another point that really resonates with me with Amazon Bedrock is the optionality and model choice. Today's best models will not be tomorrow's best models; today's innovation will be tomorrow's relic. So, having that optionality is super critical. As organisations look to experiment and move forward into production, having that choice and flexibility is going to be something that they rely on a lot.
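A short sketch of what that optionality can look like: the same request pointed at different models by changing only the model identifier. The model IDs, region and prompt below are examples chosen for illustration.

```python
import boto3

# Sketch of model optionality on Bedrock: swap models by changing modelId only.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

candidate_models = [
    "anthropic.claude-3-haiku-20240307-v1:0",  # example model IDs
    "amazon.titan-text-express-v1",
]
prompt = "Summarise last week's customer feedback themes in three bullet points."

for model_id in candidate_models:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    text = response["output"]["message"]["content"][0]["text"]
    print(f"{model_id}:\n{text}\n")
```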
CRN Australia: What is the unstructured data opportunity?
Chris Howard: If you look at where most organisations have committed time and effort, it’s in figuring out how to make structured data more easily accessible, consumable and trusted across the organisation. Most organisations still run their business on structured data in the form of spreadsheets, relational databases and tables. They drive their business by building the right analytics and insights to make good business decisions, but predominantly from structured data.
But the rub is that the world we live in and experience as humans is not structured. 80 percent of what we experience is unstructured data – images, videos, acoustics, voice, text. So one of the benefits you get from a large language model is the ability to interpret those things, not just to create new content but also to understand content.
How can I listen to a call transcript and actually extract the real sentiment of the caller?
Think about a new device coming out in the mobile industry, and a customer says, ‘I love my new phone, it's got an awesome battery and it’s powered all day long and I really love the interface, but the speaker sucks’. Well, how do you fully measure that sentiment? The feedback is somewhat nuanced, and language models are rather good at determining this sentiment, allowing organisations to act on it.
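One way this nuanced, aspect-level sentiment can be pulled out is by prompting a model to return structured output. The following is a hedged sketch using a Bedrock model; the review text, model ID, region and output format are illustrative assumptions.

```python
import json
import boto3

# Sketch of aspect-level sentiment extraction from a nuanced review.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

review = ("I love my new phone, it's got an awesome battery and it's powered "
          "all day long and I really love the interface, but the speaker sucks.")

prompt = (
    "Extract the sentiment per product aspect from this review. "
    'Respond with JSON only, e.g. {"battery": "positive"}.\n\n' + review
)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical choice
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
raw = response["output"]["message"]["content"][0]["text"]
# In practice you would validate the JSON before acting on it.
print(json.loads(raw))  # e.g. {"battery": "positive", "interface": "positive", "speaker": "negative"}
```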
How can I analyse imagery and create metadata, for example, so I can tag my images more easily and then start to find them? How do I do the same thing for video? This capability of language models presents a whole new opportunity to better understand the 80 percent of data that is unstructured.
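For the imagery case, a multimodal model can propose the metadata tags that make images searchable later. Below is a rough sketch along those lines; the file name, model ID and region are assumptions made purely for illustration.

```python
import boto3

# Sketch: ask a multimodal model on Bedrock to propose metadata tags for an image
# so it can be indexed and found later.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

with open("product_photo.jpg", "rb") as f:  # hypothetical image file
    image_bytes = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # a multimodal-capable model
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
            {"text": "List 5-10 short metadata tags describing this image, comma separated."},
        ],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```

The same pattern extends to video by sampling frames or pairing it with a transcript.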
The analogy I often use is: we have five senses, and what if you chose to make decisions using only one of them – only twenty percent of the sensory data available to you? You’d probably make some pretty uninformed decisions as you smell your way into the room, stub your toe and bang your knee. We need to be able to interpret and use all of the data available to us in order to make more informed decisions and deliver great outcomes.