Introduction

About Sigma Labs

Not heard of us before? Here's a quick overview.

We - Sigma Labs - are a technology consulting firm specialising in high-performance talent solutions. We bridge the gap between outstanding graduates and forward-thinking organisations through our comprehensive approach to talent development.

What We Do

Our work focuses on three core pillars:

  • Hiring: Recruiting exceptional graduates with the potential to become world-class technologists
  • Training: Delivering industry-leading professional development programs that transform raw talent into skilled practitioners
  • Placement: Connecting our clients with highly skilled software, data, and cloud consultants who can deliver immediate value

Key Achievements

  • Consultant Success: 83% of our clients rate our consultants in the top quartile of all people in their team. 94% of trainees who have finished our programme have successfully secured roles in the tech industry.
  • Certified B Corporation: Sigma Labs holds B Corp certification, demonstrating our commitment to balancing purpose and profit while meeting rigorous standards of social and environmental performance, accountability, and transparency
  • Innovative Training Programmes: We continuously evolve our training methodologies to stay ahead of industry trends, with this AI-Native consultant programme representing our latest innovation in talent development

Our mission is to amplify talent through exceptional training, enabling both our consultants and clients to thrive in an increasingly complex technology landscape.

Between August and November 2025, Sigma Labs trained its first cohort of AI-native consultants. This report summarises the outcomes of that training and the lessons learned.

We defined "AI-Native Consultants" as professionals who have been trained from the outset to leverage large language models (LLMs) and related AI technologies as integral tools in their consulting practice. This contrasts with traditional consultants who may adopt AI tools later in their careers.

Our hypothesis was that AI-Native Consultants would demonstrate superior performance in project delivery, problem-solving, and client outcomes compared to their traditionally trained peers. We actively worked against the risks of AI over-reliance by embedding several techniques to ensure critical thinking and domain expertise remained central to our training.

Executive Summary

By leveraging the power of LLMs, our AI-Native Course trainees demonstrated a 21% increase in project completion speed or complexity.

This was achieved whilst maintaining the same quality of outcomes, and without a decrease in knowledge retention.

Methodology

We conducted an A/B-testing-style study over a 15-week period involving 18 trainees split into two groups of 9. One group completed our Traditional Course, while the other completed our AI-Native Course.

Both groups were assessed on project completion speed, quality of outcomes, and knowledge retention through standardised tests and project evaluations.

Background

Key Points
  • GitHub Copilot (2021) marked the shift from autocomplete to true AI pair-programming
  • 76% of developers using or planning to use AI tools (Stack Overflow 2024), up from 70% in 2023
  • Fast-moving startups report 80%+ of codebase written by AI; Big Tech seeing revolutionary change

The release of GitHub Copilot in mid-2021 marked a watershed moment for software development: for the first time, a large-scale model trained on public repositories could sit alongside a developer in their IDE and suggest entire functions, boilerplate, or even complex algorithms in real time. Powered by OpenAI’s Codex, Copilot demonstrated that AI could move beyond simple autocomplete and into a true “pair-programming” paradigm. Early adopters praised its ability to reduce repetitive coding tasks, speed up prototyping, and introduce best-practice patterns.

In the years since, an ecosystem of AI co-pilot tools has emerged. Cursor, Tabnine, Amazon CodeWhisperer, and enterprise LLM integrations now vie for developers’ attention, each bringing its own strengths - whether tighter on-prem security, better support for specialised languages, or deeper integration with cloud pipelines. Meanwhile, chat-based assistants like ChatGPT and Anthropic’s Claude offer natural-language debugging, architectural guidance, and even test-generation features. As these co-pilots learn from private codebases and context, they’re shifting from generic suggestion engines to customisable teammates that understand a team’s coding conventions, security policies, and domain-specific libraries.

On the horizon, we see new ‘Agentic’ tools that promise to replace junior or mid-level technologists completely by empowering the AI to complete multi-step actions encompassing the full software development lifecycle. Development is progressing at pace with tools such as Devin.ai, but a proven track record of producing industry-standard code on large existing codebases is yet to be seen.

This document is predicated on the hypothesis that AI tools will continue to become more powerful, and that companies will realise that without the performance gains these tools provide, they won't be able to keep up with the speed at which their competitors can ship software.

Industry Research

Within start-ups we've seen a much larger uptake, with one reporting that they believe 80%+ of their codebase has been written by AI. Most notably, the following quote illustrates how dramatic that uptake has been:

“There is code that has been written, tested, deployed and will - eventually - be removed without ever having had a human see it”

This reinforces the sense from industry that the current suite of tools is most useful for greenfield projects, with limited scope, where moving fast is preferable to solving problems comprehensively.

This rapid uptake by very early-stage start-ups can be explained by the forcing effect that limited resources have on small companies. Poor-quality code is unlikely to force them to close; not having product-market fit will.

More widely in the industry, the Stack Overflow Developer Survey 2024 stated that

76% of all respondents are using or are planning to use AI tools in their development process this year, an increase from last year (70%). Many more developers are currently using AI tools this year, too (62% vs. 44%).

In the world of Big Tech, adoption within teams building the tools (i.e. the most obvious first adopters) is surprisingly high. Tim Rogers (Product Owner at GitHub) shared their data on Hacker News:

So far, the agent has been used by about 400 GitHub employees in more than 300 of our repositories, and we've merged almost 1,000 pull requests contributed by Copilot. [...] In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent [...]

To summarise, adoption is accelerating rapidly across all businesses:

  • Fast-moving start-ups have seen revolutionary change in their processes
  • Big tech at the bleeding edge has also seen the value

Hypothesis

Key Points
  • AI shifts bottleneck from code writing to requirements gathering and quality assurance (V-Bounce model)
  • AI-native technologists must excel at prompting, stakeholder management, and quality assurance
  • Companies will hire junior technologists to work at new bottleneck: requirements and testing

In the research paper titled 'The AI-Native Software Development Lifecycle: A Theoretical and Practical New Methodology', the authors describe a radical change in which activities take up the most time when producing software. Their argument builds on the ‘V-Model’ of software development (see Figure 1 below), which describes the way software is traditionally built.

In this model, cheap, junior technologists are brought in to tackle the largest bottleneck in this system (represented by the largest box) - writing the code. This is not technically the hardest part of the problem, and if the problem has been well defined and structured, this should be achievable for juniors to complete with support.

However, with the use of AI, this model would be transformed to look more like this - the ‘V-Bounce’ model (see Figure 2 below). In this new model, it is apparent that the bottleneck in this system is no longer writing code - which is mostly automated - but rather in the requirements gathering and testing phases of the project. We hypothesise that one outcome of this is that companies will now be hiring cheap, junior technologists to work at the new bottleneck of production - requirements gathering and quality assurance.

V-Model Diagram

Figure 1: V-Model of Software Development

V-Bounce Diagram

Figure 2: V-Bounce Model of Software Development, post AI

There are, of course, many other bottlenecks in a software project - architecture, deployment and cross-team coordination - that aren't solely code-generation issues. But these will often fall outside a junior engineer's remit, sitting with an Architect or Tech Lead to work through.

Our hypothesis is that junior technologists will now need to be highly skilled in these three areas to be effective in this new world:

1. Use AI to generate code to solve problems and maintain productivity

  • Write a concise prompt that specifies the programming language, input/output formats, and any edge-case requirements (a minimal prompt sketch follows this list).
  • Provide relevant context (existing code, data schemas, dependencies) so the AI generates code that fits the project.
  • Compare multiple AI outputs, choose the best one, and refine the prompt to address missing features or bugs.
  • Review every line of AI-generated code for correctness, performance, and security before merging.
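
To make the first of these bullets concrete, the sketch below shows one way a trainee might assemble such a prompt programmatically. It is a minimal illustration: the `build_code_prompt` helper and its fields are hypothetical, not part of our delivery toolkit.

```python
# Hypothetical helper: assembles a structured code-generation prompt.
# The field names and layout are illustrative, not a prescribed standard.

def build_code_prompt(task: str, language: str, inputs: str, outputs: str,
                      edge_cases: list[str], context: str = "") -> str:
    """Combine the task, I/O contract, edge cases and project context into one prompt."""
    edge_case_lines = "\n".join(f"- {case}" for case in edge_cases)
    return (
        f"Write {language} code for the following task.\n\n"
        f"Task: {task}\n"
        f"Input format: {inputs}\n"
        f"Output format: {outputs}\n"
        f"Edge cases to handle:\n{edge_case_lines}\n\n"
        f"Relevant project context:\n{context}\n\n"
        "Return only the code, with docstrings and type hints."
    )

prompt = build_code_prompt(
    task="Parse a CSV of transactions and total the amounts per customer",
    language="Python",
    inputs="CSV with columns customer_id, amount (pence, integer)",
    outputs="dict mapping customer_id to total amount in pence",
    edge_cases=["empty file", "malformed rows", "duplicate customer_ids"],
    context="The project already uses the csv module; avoid extra dependencies.",
)
print(prompt)
```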

2. Work with stakeholders to ensure they're solving the right problem

  • Schedule and lead meetings to capture business goals, constraints, and success metrics.
  • Turn stakeholder input into concise user stories, acceptance criteria, and manual decomposition before prompting AI.
  • Provide regular, transparent updates on progress, highlighting both AI-generated work and manual changes.
  • Document edge-cases, data-privacy needs, and potential AI hallucinations so risks are visible from day one.

3. Ensure high quality output through comprehensive user testing and validation

  • Define clear test scenarios (unit, integration, end-to-end) before asking AI to scaffold tests.
  • Prompt AI tools to generate initial test cases, then refine or extend those tests to cover edge cases (see the sketch after this list).
  • Automate QA checks in CI/CD pipelines so every commit undergoes the same rigorous validation.
  • Track and report test coverage, code quality metrics, and security findings to the team.
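
As an illustration of refining AI-scaffolded tests to cover edge cases, the sketch below uses pytest against a hypothetical `total_per_customer` function; it is not taken from our course materials.

```python
# Hypothetical example: an AI-scaffolded "happy path" test, extended by the
# trainee with edge cases before the suite is accepted.

def total_per_customer(rows: list[tuple[str, int]]) -> dict[str, int]:
    """Sum transaction amounts (in pence) per customer_id."""
    totals: dict[str, int] = {}
    for customer_id, amount in rows:
        totals[customer_id] = totals.get(customer_id, 0) + amount
    return totals

def test_happy_path():  # typically what the AI scaffolds first
    assert total_per_customer([("a", 100), ("b", 50)]) == {"a": 100, "b": 50}

# Edge cases added by the trainee after reviewing the scaffold
def test_empty_input():
    assert total_per_customer([]) == {}

def test_duplicate_customers_are_summed():
    assert total_per_customer([("a", 100), ("a", 25)]) == {"a": 125}

def test_negative_amounts_are_allowed():  # e.g. refunds
    assert total_per_customer([("a", 100), ("a", -40)]) == {"a": 60}
```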

Risks

Key Points
  • There are significant risks to both consultants and clients if AI is over-relied upon
  • Mitigation requires rigorous training, automated guardrails, and strict security practices

In this section, we highlight the risks if AI-Native Technologists were to outsource the majority of the writing of their code to an AI Copilot. These risks are split into:

  • Consultant Risk: Short or long-term performance risk to our Consultants
  • Client Risk: Security, financial or reputational risk to our Clients

Consultant Risk: Skill Atrophy

Over-reliance on AI erodes fundamental coding skills and problem-solving abilities

Relying too heavily on AI for code generation can erode a developer's fundamental skills, like breaking problems down, writing algorithms by hand and debugging complex issues.

Over time, they may lose confidence in solving novel challenges without an AI crutch and fail to build the deep understanding needed for robust, maintainable software. This "skill atrophy" may undermine their long-term growth and adaptability as technologists.

Consultant Risk: Failure to Attain Mastery of Tools

Superficial knowledge prevents deep understanding and troubleshooting ability

Failure to attain true mastery of AI tools means trainees may develop only a "veneer of knowledge" - the illusion of skill without the deep understanding that comes from hours of trial, error, and deliberate practice. Without wrestling through tedious, hands-on challenges, they risk knowing which buttons to click but not why or how the underlying systems work.

This superficial familiarity leaves them unable to diagnose failures, optimise workflows, or innovate when AI suggestions fall short. It is almost certain that with the current state of AI tools, the AI will assist in 90% of the task, but the final, hard 10% will still require a technologist's direct involvement in the code.

Consultant Risk: Code Comprehension & Critical Review

Reduced ability to understand, assess, and improve AI-generated code

Code comprehension and critical review refer to the ability to deeply understand, assess, and improve any code, even when it is generated by AI. Without having to write all of the code themselves, trainees may not truly understand what each part does or why it was written in a particular way.

They might also take the AI's word on what high-quality code is without building the skills to judge this for themselves. This can lead to hidden bugs, design flaws, and missed learning opportunities, as technologists fail to question, critique, or improve the AI's suggestions, ultimately weakening both individual confidence and team code quality.

Client Risk: Security Implications & Business Risk

AI-generated vulnerabilities expose organisations to breaches and compliance failures

AI-generated code can unknowingly introduce security vulnerabilities, such as un-sanitised inputs, weak authentication, or misconfigured permissions, that the model itself cannot detect. If these flaws reach production, they leave organisations exposed to data breaches, service outages, and regulatory non-compliance. The resulting incidents damage reputation, erode customer trust, and can incur significant financial penalties.

Without rigorous security reviews and automated vulnerability scanning of every AI-suggested snippet, businesses risk embedding dangerous weaknesses into their systems. For example:

  • Generating infrastructure scripts that open network ports broadly or disable firewalls by default – exposes internal services to the internet, allowing attackers to probe and breach systems.
  • Disabling TLS certificate validation in HTTP clients – leaves data in transit open to interception or tampering, enabling undetected man-in-the-middle attacks.
  • Failing to sanitise user-supplied file paths – permits directory traversal or arbitrary file reads, risking exposure of sensitive files and potential remote code execution (illustrated in the sketch below).
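
To illustrate the last of these, the sketch below shows the kind of path-handling flaw an AI assistant can introduce, alongside a safer alternative. The file-serving functions and the `/srv/app/uploads` directory are hypothetical, not code from a client engagement.

```python
# Hypothetical example of the unsanitised-file-path risk and one mitigation.
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads").resolve()

def read_user_file_unsafe(filename: str) -> bytes:
    # Risky pattern an AI assistant may suggest: the user-supplied name is
    # joined directly, so "../../etc/passwd" escapes the uploads directory.
    return (BASE_DIR / filename).read_bytes()

def read_user_file_safe(filename: str) -> bytes:
    # Resolve the path and confirm it is still inside BASE_DIR before reading.
    target = (BASE_DIR / filename).resolve()
    if not target.is_relative_to(BASE_DIR):
        raise ValueError("Path escapes the allowed directory")
    return target.read_bytes()
```
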
Client Risk: Licence & IP, or Data Exposure

Restrictive licences and PII leakage create legal and compliance risks

Generative models trained on billions of lines of public code can inadvertently reproduce snippets governed by restrictive licences or reveal patterns that mirror proprietary implementations. Trainees who accept AI-generated code verbatim risk tainting their organisation's codebase with third-party licence obligations or exposing internal logic and secrets to external systems.

Additionally, most AI systems require users to send entire code blocks to them in order to be expanded upon. If a technologist were to - knowingly or otherwise - send Personally Identifiable Information (PII) to an AI provider, it could breach EU/US law (e.g. GDPR).

Examples:

  • Embedding a block of AI-generated code under the GPL into a closed-source microservice—thereby forcing the entire service to adopt a copyleft licence.
  • Incorporating an MIT-licensed helper function without attribution or a licence header, creating confusion over permitted reuse.
  • Pasting a snippet containing a customer's full name, date of birth and transaction history into ChatGPT to clarify a formatting error, thereby transmitting sensitive PII to the OpenAI service without the user's consent or organisational approval.

Training Methodology

Our approach to training AI-Native Consultants

With the skill set of a junior technologist shifting from code writing to requirements gathering and quality assurance, our training methodology needed to adapt accordingly. We developed a new AI-Native training programme that emphasised these skills while still ensuring strong foundational coding abilities.

We understood that the core challenge with training AI-Native was that of over-reliance on AI tools. If trainees simply used AI to generate code for them, they would not develop the critical thinking and problem-solving skills needed to be effective technologists. To address this, we embedded several techniques into our training programme to ensure that trainees remained active participants in the development process.

1. End-to-End Project-Based Learning

Full project lifecycle training from requirements gathering to production support

In our Traditional course, we provide trainees with a fully scoped-out case study in which all requirements and discovery have already been completed. Each cohort works through a sequence of well-defined tasks - from data ingestion to transformation to basic front-end display - chunked into step-by-step challenges. This model ensures focus on coding and tool usage but omits the critical early phases of stakeholder engagement, requirements gathering and solution design. We structure it this way because writing code is mostly where they'd be spending their time as Junior Technologists.

In this new 'End-to-End' style, trainees also:

  • Conduct Requirement Gathering from clients directly ('Clients' are role-played by Coaches)
  • Run check-ins with clients to get feedback
  • Design complete architecture solutions (where they would previously have been given a solution)
  • Compare and contrast architecture solutions to decide on the best solution
  • Ensure all requirements have been met during project sign-off with a client

What this means in practice is that all Consultants can work fully across the project, from requirements gathering through to supporting in production.

2. Prompt → Explain → Apply

Active learning loop ensuring deep understanding of AI-generated code

In our Traditional stream, the learning-to-code loop might look like "Understand-Build-Test". First, the trainee has to understand the problem, break it down into steps, break those steps down into actionable pieces of code, build it, and then test what they've built. Through this process, they will (hopefully) build a deep understanding of the tools they're using by exploring their capabilities. They will iterate through wrong or nearly-right solutions before ultimately landing on one that works, often through a process of trial and error.

This approach is less effective when the AI can generate an exact solution first time. Unless we're careful, trainees will never be exposed to all the bad solutions before they end up at the good one - they'll just be given the good one.

The "Prompt → Explain → Apply" loop keeps trainees active technologists rather than passive consumers of AI output by enforcing three tight phases on every task

What this means in practice is that all Consultants fully understand what they're doing, even when LLMs are doing the work for them

3. Project Defence & Presentations

Weekly presentations to demonstrate true understanding of AI-assisted work

In an AI-Native world, having completed the task is no longer a good judge of understanding. The AI may have done the work for you, after all. Instead, we need to ensure that trainees can demonstrate deep understanding of the work they've done through a process of project defence and presentations.

At least once a week, every consultant is asked to present their project and their code in front of a Senior Coach.

This process:

  • Guarantees Comprehension: You can't fake your way through a live Q&A - defenders must truly understand every line.
  • Reinforces Critical Reading: Critics learn to combine AI suggestions with human judgement, practising both tool-use and code review skills.
  • Cultivates Accountability: The whole team sees when AI misses something or a trainee misunderstands their own code, driving continuous improvement.

4. AI Best Practice

Security, compliance, and data privacy guardrails for AI-generated code

Key Points
  • Secure-by-Design Coding: Threat modeling, vulnerability scanning, and AI pitfall checklists
  • Licence & IP Compliance: Automated scanning to prevent restrictive licence violations
  • Data Privacy & PII Protection: Policies and tools to prevent sensitive data leaks to public LLMs

An "AI Best Practice" module equips engineers with the principles, patterns and automated guardrails needed to safely harness generative code tools—specifically addressing the security, licence/IP and data-privacy risks they introduce.

  • Secure-by-Design Coding
    • Teach secure prompt patterns (e.g. default to input sanitisation) and threat-model AI outputs before merging.
    • Lab: Use static-analysis and vulnerability scanners as CI gates on every AI-generated snippet.
    • Provide a checklist of common AI pitfalls (open ports, disabled TLS checks, unsanitised file paths) and remediation snippets.
  • Licence & IP Compliance
    • Explain the difference between permissive and copyleft licences and the dangers of accepting GPL-style code verbatim.
    • Automate licence scanning in CI (FOSSA, SPDX checks) to flag any third-party snippets that carry restrictive terms.
    • Supply templates for standard attribution headers and a "safe-prompt" library that avoids licence-bound examples.
  • Data Privacy & PII Protection
    • Enforce policies against sending sensitive data or proprietary code to public LLMs.
    • Demonstrate how to scrub or anonymise inputs (redacting names, IDs) and use enterprise-grade, on-prem or private-endpoint models.
    • Lab: Simulate a GDPR breach by pasting PII into a public API and then apply scripts to detect & remove leaks (a minimal redaction sketch follows this list).
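
To give a flavour of that lab, the sketch below shows a minimal redaction pass that could be run over text before it is sent to a public LLM. The patterns and the `redact` helper are illustrative assumptions; real engagements would use far more thorough tooling.

```python
# Minimal, illustrative PII redaction pass; the regexes are deliberately
# simple and would be replaced by proper tooling in a real engagement.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:0|\+44)\d{9,10}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before sending text to an LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

snippet = "Customer jane.doe@example.com (tel 07700900123) reported a failed payment."
print(redact(snippet))
# Customer [EMAIL REDACTED] (tel [UK_PHONE REDACTED]) reported a failed payment.
```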

By combining hands-on exercises with automated CI-gated checks, ready-made checklists and real-world examples, this module ensures trainees can confidently generate, vet and deploy AI-assisted code without exposing their organisation to security, legal or compliance failures.

Tools & Technology Selection

Throughout our AI-Native training programme, we conducted extensive experimentation across multiple categories of AI tools to identify the optimal technology stack for our consultants. Our evaluation focused on real-world performance, learning curve, and suitability for professional consulting environments.

Chat-Based AI Assistants

ChatGPT, Gemini, and Claude compared for technical problem-solving

We considered: ChatGPT, Gemini, Claude

We chose: Claude - Demonstrated the best ability when working with code, provided more structured and actionable responses, and excelled at explaining trade-offs in architectural decisions. Particularly effective when working in Learning Mode.

ChatGPT offered strong general-purpose capabilities and was preferable for interpersonal interactions around AI. Gemini provided good integration with the Google ecosystem but showed inconsistent performance on complex technical queries and architectural guidance; however, we brought it in later in the programme and experimented with it less.

Generative Coding Tools

GitHub Copilot and Cursor evaluated for in-IDE assistance

We considered: GitHub Copilot, Cursor

We chose: GitHub Copilot - Offered the most seamless IDE integration, particularly with VS Code. Excellent context awareness and reliable completions across Python. It seemed to respond well to the `copilot-instructions.md` file, and we could prompt it when using auto-complete.

Cursor provided innovative features and powerful multi-file editing capabilities, but had a steeper learning curve, higher barrier to entry and didn't integrate as well with the rest of our stack.

Agentic AI Development Tools

Claude Code, Copilot Workspace, and Codex for autonomous development

We considered: GitHub Copilot (Workspace), OpenAI Codex, Claude Code

We chose: Claude Code - Significantly outperformed alternatives with exceptional ability to:

  • Understand complex project requirements and break them into actionable steps
  • Navigate large codebases and maintain context across multiple files
  • Execute full development lifecycles including implementation, testing, and debugging
  • Provide clear explanations of its reasoning and approach
  • Handle edge cases and error scenarios with minimal intervention

GitHub Copilot (Workspace) offered extended capabilities beyond code completion, but with limited autonomous action and still requiring significant developer guidance for complex tasks. OpenAI Codex provided powerful code generation from natural language, but was less integrated into full development workflows with limited ability to handle multi-step, context-dependent tasks.

The combination of Claude Code for agentic development tasks with GitHub Copilot for in-editor assistance proved highly effective. Claude Code's superior reasoning and multi-step execution capabilities made it indispensable for complex project work, whilst Copilot provided reliable day-to-day coding assistance.

Security & Code Quality Tools

Snyk for vulnerability scanning and licence compliance

We considered: Snyk, SonarQube, Checkmarx, GitHub Copilot in Pull Requests

We chose: Snyk - Deployed for comprehensive security analysis of AI-generated code. Provides:

  • Real-time vulnerability scanning for dependencies and code
  • Licence compliance checking to prevent IP violations
  • Integration with CI/CD pipelines for automated security gates
  • Clear remediation guidance for identified vulnerabilities

Additionally, we used GitHub Copilot in Pull Requests to provide AI-assisted code review suggestions, helping to catch potential issues and improve code quality during the review process.

This combination of tools created a robust, secure, and highly productive development environment. By carefully selecting best-in-class tools for each category, we enabled our AI-Native consultants to work at significantly higher velocity whilst maintaining code quality and security standards.

Outcomes

Impact and analysis of the AI-Native training programme

Key Points
  • AI-Native consultants completed coursework 19% faster (29.2 vs 36.1 hours) while maintaining high understanding scores
  • Group projects showed significant productivity multiplier: AI-Native consultants delivered more complex, feature-rich applications in the same timeframe
  • Consistently higher interview performance across all formats, with 8.1/10 satisfaction and +33% self-reported productivity gains

Impact on Productivity

Independent Learning

To assess the impact of AI-Native training on independent learning speed, we compared the time taken by the traditional and AI-enabled training groups to complete the coursework of our programme. The modules covered new technologies and concepts not previously encountered by either group. Additionally, the AI-enabled group were given less detail for the projects they had to complete, requiring them to research more independently.

What we found was that, by the time trainees were fully ramped up on AI tooling (i.e. using generative and agentic coding tools and LLMs regularly), they were able to complete the coursework to the same standard, with a high level of understanding, in 19% less time on average.

  • Completion Time for AI-Natives: -19% (29.2 hours vs 36.1 hours average time to complete coursework)
  • Average Score in Project Understanding: 8.4/10 (assessed by Senior Coaches during Project Defence sessions)
  • Average Score in Code Defence: 7.2/10 (assessed by Senior Coaches across 'Justification', 'Correctness' and 'Depth' of answer)

Group Projects

To measure the impact of AI-Native training on productivity, we compared the Group Project outputs between traditional and AI-enabled training groups. Both groups worked on the same project briefs with the same complexity and timeframes. The visual comparison below demonstrates the significant difference in output and complexity achieved by AI-enabled consultants.

A visual comparison demonstrates that AI-Native consultants not only completed more projects within the same timeframe but also tackled projects of greater complexity and scope. This represents a significant productivity multiplier while maintaining code quality and architectural standards.

Traditional Training Projects

Projects completed by consultants using traditional training methods

Traditional Training - Group 1 Project

Traditional Group 1 (taken from Project Presentation)

Traditional Training - Group 2 Project

Traditional Group 2

AI-Native Training Projects

Projects completed by consultants using AI-Native training methods

AI-Native Training - Group 1 Project 1

AI-Native Group 1 (taken from Project Presentation)

AI-Native Training - Group 2 Project 1

AI-Native Group 2

Below you will find two graphs that visualise lines of code written and cyclomatic complexity. Whilst neither graph gives important insights on its own, it is worth noting that AI-Enabled trainees managed to write 6.5x the code whilst also maintaining a low complexity score.

Total lines of code delivered by each group. AI teams produced significantly more code volume, with Group 2 delivering 6.5x more lines than their traditional counterpart.

Average cyclomatic complexity per function. AI teams maintained similar or slightly higher complexity levels, indicating more sophisticated logic implementation.
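
For context on the complexity metric, the sketch below shows one simple way to approximate per-function cyclomatic complexity from Python source using the standard library's ast module. It is an illustrative helper, not the tooling used to produce the figures above.

```python
# Illustrative approximation of cyclomatic complexity:
# 1 + the number of decision points (branches, loops, boolean operators, handlers).
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    """Return an approximate complexity score for each function in `source`."""
    tree = ast.parse(source)
    scores: dict[str, int] = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            decisions = sum(isinstance(child, DECISION_NODES)
                            for child in ast.walk(node))
            scores[node.name] = 1 + decisions
    return scores

example = '''
def classify(score):
    if score > 90:
        return "A"
    elif score > 75:
        return "B"
    return "C"
'''
print(cyclomatic_complexity(example))  # {'classify': 3}
```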

Consultant Performance & Quality

To evaluate the effectiveness of our AI-Native training approach, we conducted a series of standardised interview assessments comparing AI-Enabled consultants against those trained using traditional methods. The results demonstrate consistently higher performance across all interview formats.

Consultant Feedback & Self-Reflection

We collected feedback from consultants who completed the AI-Enabled training programme. Their responses provide valuable insights into the effectiveness of the approach and areas for improvement.

  • Overall Satisfaction: 8.1/10 (average happiness rating, with 75% scoring 8+)
  • Productivity Gain: +33% (self-reported average performance impact)
  • Learning Effect: +21% (self-reported average impact on learning)

What Consultants Loved

"I liked the fact that I was able to use advanced AI's in a professional setting and really scale up the level and quality of my work"
"Significantly speeds up gaining familiarity and general mastery over new technology tools, more efficient than reviewing over docs and simplifies learning through summary"
"I liked most the deeper understanding gained from code defence, I feel this took me beyond simply doing to understanding AWS more deeply which is so valuable in industry"

Key positive themes that emerged:

  • Immediate Support: Consultants valued getting instant explanations and debugging help, accelerating their workflow
  • Tool Exposure: Exposure to diverse AI tools (Claude's generative and agentic versions, Copilot) expanded their technical toolkit
  • Deeper Understanding: Code defences forced deep comprehension, taking consultants beyond surface-level completion
  • Learning Efficiency: AI as a learning tool proved highly effective, with 3 consultants reporting 50-70% learning gains

Areas for Improvement

Consultants also identified several areas where the programme could be enhanced:

  • Agentic AI Caution: Concerns about over-reliance on agentic AI tools (like Claude Code), with recommendations to introduce them later in training or use them more sparingly
  • Assessment & Feedback: Need for clearer assessment criteria and more consistent feedback, particularly around gauging true understanding vs AI-generated work
Key Insight

While consultants overwhelmingly appreciated the productivity and learning benefits of AI tools, they expressed thoughtful concerns about maintaining deep understanding and avoiding over-reliance. This validates our emphasis on code defences and structured learning phases, while highlighting opportunities to refine the timing and intensity of assessments.

Reflections & Conclusion

Insights from our journey so far

Firstly, it is important to acknowledge that all of the data collected in this report is from a relatively small sample size. Whilst the results are promising, further study with larger cohorts and over longer timeframes will be needed to validate these findings. We're excited to continue refining our approach and measuring outcomes as we scale our AI-Native training programme.

The shift to AI-Native technologists represents a fundamental change in how we train and deploy junior engineers. By focusing on requirements gathering, stakeholder management, and quality assurance, we prepare consultants for the new bottlenecks in software development.

Our AI-Native training programme has demonstrated significant gains in productivity and quality, with consultants completing work faster while maintaining high standards. The emphasis on active learning, code defences, and AI best practices ensures that consultants develop deep understanding rather than superficial reliance on AI tools.

As AI continues to evolve, so too must our training methodologies. We are committed to refining our approach based on feedback and outcomes, ensuring that our consultants remain at the forefront of this transformative era in technology.

Interested?

Want to learn more about our AI-Native Consultants or hire our services?

Get in touch at clients@sigmalabs.co.uk