Australia's Mantel Group talks up success with GitHub Copilot

Adam Durbin, Mantel Group.

Australia’s Mantel Group is talking up its use of GitHub Copilot after shaving 100 days off API development for an unnamed bank, speeding up delivery of the project by a third.

Mantel Group has one of the largest cohorts of AI and machine learning specialists in Australia and currently has the most certified GitHub Copilot practitioners in ANZ.

The company shared little detail about the bank project, but is using it as a proof point as it touts its use of AI tools to speed up software development.

“What we are seeing today is just the tip of the iceberg,” Mantel Group CTO Adam Durbin said.

It is applying Gen-AI tools across its software development lifecycle, including using GitHub Copilot to generate test cases for application code, to examine legacy code and suggest efficient ways to update it, and to create documentation for code and projects.

It is also “seeing that end-to-end software delivery lifecycle really be accelerated through agentic AI and the use of agentic solutions, versus a single tool, like a Copilot,” Durbin said.

For example, AI agents can integrate with a company’s security tools to learn about the vulnerabilities that need to be remediated, then generate the code, review it, uplift it, and then send it for review and ultimately into production, Durbin said.

It’s not hard to find complaints from developers about the quality of AI-generated software code. Asked about this, Durbin said a “typical coding tool is only going to be able to take a general context and generate general code.”

“It's really powerful at things like code completion, scaffolding, starting boilerplate code, but where the complaints start coming is where you want to be more specific. You need to understand the company's context. You need to understand the challenges, the reasons and business requirements behind the code being generated.”

“That's where supplementing these tools with agentic solutions really comes into its own.”

Augmenting the behaviour of agents with “deterministic tools” can also “get more deterministic outcomes,” said Mantel Group’s generative AI engineering acceleration lead, Sam McLeod.
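McLeod did not elaborate on the tooling, but the idea is easy to sketch. Below is a minimal, hypothetical Python illustration of the pattern, pairing a non-deterministic code generator with a deterministic gate (here, Python's built-in ast parser); propose_patch, deterministic_gate and agent_step are invented names for illustration, not Mantel's or GitHub's actual APIs.

```python
import ast

def propose_patch(prompt: str) -> str:
    # Stand-in for a model call; a real agent would query an LLM here.
    return "def add(a, b):\n    return a + b\n"

def deterministic_gate(source: str) -> bool:
    # Deterministic check: identical input always yields an identical verdict.
    try:
        tree = ast.parse(source)  # candidate must at least be valid Python
    except SyntaxError:
        return False
    # Trivial example policy: require at least one function definition.
    return any(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))

def agent_step(prompt: str, max_retries: int = 3) -> str | None:
    # Retry the non-deterministic generator until the deterministic gate passes.
    for _ in range(max_retries):
        candidate = propose_patch(prompt)
        if deterministic_gate(candidate):
            return candidate
    return None

print(agent_step("write an add function"))
```

Because the gate is a fixed program rather than a model, the same candidate code is always accepted or rejected the same way, which is the sense in which such tools make an agent's outcomes more deterministic.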

We also asked about the cost of testing AI-generated code – a concern we’ve heard from a firm specialising in software testing.

“Whether it's AI-generated code or good human-generated code, there shouldn't be a difference in software testing,” Durbin argued.

“Ultimately, we’re generating code that should meet the business requirements, and that's where AI comes in. Generating good code shouldn't have any difference in the level of testing.”

Mantel recently completed a project with an Australian bank in which AI generated more than 95 percent of unit test coverage.

“That takes away a significant portion of human-created unit tests, so we start to be able to not only generate the code, but test the code using AI as well,” Durbin said.

The company is also looking at “other personas in the software delivery lifecycle”.

“If you think about a typical software delivery project, you've got business analysts, architects, security engineers, developers - all of these roles are capabilities that can be supplemented through agentic AI,” Durbin said.

“It is a fast-moving space… but what we are seeing is the stabilisation of the benefits. We are starting to be able to prove that we can get good quality generated code consistently at scale using these sorts of techniques.”
