In my previous newsletter, I talked about the resurgence of citizen development through Low-Code/No-Code (LCNC) platforms in oil & gas. Today, I want to explore what happens when we add Generative AI to that equation—because this combination is either going to revolutionize how our industry builds solutions, or create the most sophisticated security nightmare we’ve ever seen.
The rise of GenAI in the workplace can further amplify what citizen developers build on LCNC platforms. To harness it responsibly, though, citizen developers must become more familiar with the data used to train the AI so that its limitations and biases are well understood. Otherwise, it is very difficult to determine whether a tool’s output contains AI hallucinations.
The Numbers That Should Get Your Attention
The convergence of GenAI and LCNC isn’t just hype—it’s becoming the dominant force in enterprise application development. Gartner predicts that by 2025, 70% of new enterprise applications will use LCNC technologies, while the combined market is projected to reach $50 billion by 2028.
More importantly for operational efficiency, early adopters are reporting productivity gains of up to 45% and cost reductions of 40% when combining AI with citizen development platforms.
But here’s the sobering reality check: security researchers have found that 40% of AI-generated code contains security vulnerabilities, and 19.7% of AI-generated package dependencies are completely “hallucinated,” meaning they don’t actually exist, yet the AI presents them as if they do.
What’s Actually Happening in the Platforms
The major LCNC platforms have moved far beyond simple AI assistance. Microsoft Power Platform’s latest releases include hundreds of AI-enhanced features, with 80% of developers reporting faster development and 65% reporting higher job satisfaction when using Copilot integration.
Salesforce’s Agentforce platform represents the evolution from assistive to autonomous AI, with Salesforce reporting that 30-50% of internal work is now performed by AI. OutSystems has released its AI Agent Builder for general availability, and ServiceNow ranked #1 for Building and Managing AI Agents in Gartner’s 2025 Critical Capabilities report.
These aren’t incremental improvements. These platforms now ship autonomous agents capable of end-to-end application development with minimal human intervention.
The Oil & Gas Reality Check
Our industry is particularly well-positioned to benefit from this convergence, but we’re also uniquely vulnerable to its risks. Oil and gas companies lead industrial GenAI adoption, with 62% already implementing or planning GenAI solutions.
Shell’s enterprise AI implementation exemplifies the transformation potential: the company has scaled AI predictive maintenance to more than 10,000 pieces of equipment and achieved an additional 1-2% in LNG production optimization. It has also reduced CO2 emissions by roughly 355 tonnes per day while generating $35M+ in cost savings through 75+ DIY applications built by citizen developers.
Chevron pioneered regulatory innovation, becoming the first company permitted by the FAA to fly drones in shared airspace for pipeline monitoring, while BP’s digital twin applications delivered 30,000 additional barrels of oil production in their first year.
The Security Elephant in the Room
Here’s where I need to be blunt about the risks. The security research coming out around AI-generated code is genuinely alarming. A comprehensive study analyzing 576,000 code samples found that 19.7% of package dependencies were “hallucinated”, creating severe security risks through what researchers call “slopsquatting” attacks.
Joseph Spracklen, the study’s lead researcher, explains the attack vector: “Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users.” Critically, 43% of hallucinated package names repeated across multiple queries, making them predictable targets for attackers.
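To make this concrete, here’s a minimal sketch of a guardrail any team can bolt onto its workflow: before installing a dependency an AI assistant suggests, confirm the package actually exists on the public registry. This sketch assumes a Python shop and uses PyPI’s public JSON metadata API; the suggested package names are hypothetical stand-ins for AI output.

```python
import urllib.request
import urllib.error

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # PyPI's public metadata API

def package_exists(name: str) -> bool:
    """Return True if the package is published on PyPI.

    A missing package is a red flag: the AI may have hallucinated
    the dependency, or an attacker may simply not have squatted it yet.
    """
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (rate limits, outages) need a human look

# Screen every dependency an AI assistant proposed before installing.
suggested = ["requests", "numpy", "fastjson-utils"]  # hypothetical AI output
for pkg in suggested:
    status = "ok" if package_exists(pkg) else "NOT FOUND - possible hallucination"
    print(f"{pkg}: {status}")
```

Note the limitation: once an attacker has squatted a hallucinated name, the package will exist, so an existence check only catches hallucinations early. Mature teams also review package age, download counts, and maintainer history before trusting anything new.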
Multiple studies confirm concerning vulnerability rates: Stanford research shows 40% of AI-generated code contains security vulnerabilities, while the FormAI Project found 51.24% of AI-generated C programs contained at least one vulnerability.
Jeff Williams, CTO at Contrast Security, specifically addresses our citizen development reality: “Citizen developers are more likely to make inadvertent mistakes that could lead to security issues. I would expect citizen developers will make a lot of basic mistakes such as hardcoded and exposed credentials, missing authentication and authorization checks, disclosure of PII.”
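To illustrate the first mistake on Williams’ list, here’s a minimal before-and-after sketch, assuming a Python script that needs an API credential; the variable name is hypothetical. The fix is simple and teachable: pull secrets from the environment or a secrets manager, and fail loudly when they’re missing.

```python
import os

# Anti-pattern: a credential baked into source code, visible to anyone
# with repo access and to any AI assistant trained on that code.
# API_KEY = "sk-prod-9f3a..."   # never do this

# Safer pattern: read from the environment (populated by a secrets
# manager or the deployment platform), and fail fast if it's absent.
API_KEY = os.environ.get("SCADA_API_KEY")  # hypothetical variable name
if not API_KEY:
    raise RuntimeError(
        "SCADA_API_KEY is not set. Configure it via your secrets "
        "manager rather than hardcoding it in the script."
    )
```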
The Data Transparency Imperative
This brings me back to my core point: citizen developers must become more familiar with the data used to train the AI so that limitations and biases are well understood. Without this understanding, it becomes nearly impossible to determine whether tool output contains AI hallucinations.
MIT research emphasizes that “AI hallucinations occur when generative AI models create information that appears credible but is actually false or misleading”. The issue is so concerning that the World Economic Forum calls transparency “step one in alignment on GenAI best practices”.
The regulatory landscape is responding rapidly. California’s AB 2013, effective January 1, 2026, requires comprehensive documentation of GenAI training data, while the EU AI Act establishes comprehensive transparency obligations for high-risk AI systems. Biden’s Executive Order 14110 requires federal agencies to inventory AI use cases publicly.
The Governance Framework You Need Now
McKinsey’s research shows that CEO oversight of AI governance is one of the elements most correlated with higher bottom-line impact from GenAI use. However, only 27% of organizations review all GenAI content before use, highlighting a massive governance implementation gap.
Here’s what successful organizations are implementing:
Immediate Actions (0-6 months):
- Establish AI governance committees with C-suite sponsorship
- Conduct comprehensive AI inventories across business functions (a minimal record sketch follows this list)
- Implement basic data loss prevention policies for citizen development platforms
- Begin mandatory training program development for AI literacy
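For the AI inventory item above, even a lightweight structured record beats a spreadsheet of free text, because it forces every entry to answer the same governance questions. Here’s a minimal sketch in Python; the field names and risk tiers are assumptions to adapt to your own taxonomy.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # e.g., touches safety systems or PII

@dataclass
class AIUseCase:
    """One entry in the enterprise AI inventory (fields are illustrative)."""
    name: str
    business_function: str      # e.g., "drilling", "HSE", "finance"
    platform: str               # e.g., "Power Platform", "Agentforce"
    owner: str                  # an accountable human, not a team alias
    training_data_source: str   # supports AB 2013 / EU AI Act documentation
    risk_tier: RiskTier
    reviewed_before_use: bool = False
    notes: list[str] = field(default_factory=list)

# Example entry (values are hypothetical):
entry = AIUseCase(
    name="Mud weight helper",
    business_function="drilling",
    platform="Power Platform",
    owner="j.smith",
    training_data_source="vendor foundation model; no internal data",
    risk_tier=RiskTier.MEDIUM,
)
```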
Medium-term Initiatives (6-18 months):
- Deploy comprehensive risk management frameworks aligned with NIST AI Risk Management Framework
- Implement data lineage tools for AI training data transparency
- Establish certification programs for citizen developers
- Create audit and compliance monitoring systems
Long-term Strategic Goals (18+ months):
- Achieve ISO 42001 certification for AI management systems
- Establish centers of excellence for AI governance and citizen development
- Implement advanced monitoring and alerting for AI risk management
- Develop predictive risk assessment capabilities
The Counterargument Worth Considering
I’d be remiss if I didn’t acknowledge the pushback from some industry experts. Jon Kennedy, Senior VP of Engineering at Quickbase, argues: “In the future, everyone will be generating software, but they just won’t realize that’s what they’re doing. If you know how to ask the right questions of a copilot, you can have it quickly build an app or deploy a solution.”
GitHub research shows that 88% of developers report improved productivity and 74% say AI assistance frees them to focus on more satisfying work. Forrester predicts TuringBots will improve software development lifecycle productivity by 15-20% overall.
The productivity gains are real and substantial. The question is whether we can capture them without exposing ourselves to unacceptable risks.
Your Infinite Research Team and Development Squad
Here’s my challenge to you: imagine what you could accomplish if you had unlimited research analysts and unlimited coders at your immediate disposal. That’s essentially what GenAI-powered LCNC platforms are offering.
Picture this scenario: Your drilling engineer needs a tool to optimize mud weight calculations for a specific geological formation they’ve never encountered before. Instead of waiting months for IT to develop something or trying to adapt an existing tool that doesn’t quite fit, they could:
- Describe their requirements in natural language to an AI-powered LCNC platform
- Have the AI generate the core application logic in minutes (sketched below)
- Test and refine the tool in real-time with actual field data
- Deploy it immediately to solve their operational challenge
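To ground the “generate the core logic” step, here’s the kind of starting point an AI-powered platform might produce: a minimal sketch of the standard mud weight balance formula, MW (ppg) = pressure (psi) / (0.052 × TVD (ft)). The 0.3 ppg safety margin is an illustrative assumption, and a real field tool would need far more validation than this.

```python
PSI_PER_FT_PER_PPG = 0.052  # standard pressure-gradient conversion constant

def required_mud_weight(pore_pressure_psi: float,
                        tvd_ft: float,
                        safety_margin_ppg: float = 0.3) -> float:
    """Mud weight (ppg) needed to balance formation pore pressure.

    MW = P / (0.052 * TVD), plus a trip/swab safety margin.
    The 0.3 ppg default margin is illustrative, not a recommendation.
    """
    if tvd_ft <= 0:
        raise ValueError("True vertical depth must be positive.")
    if pore_pressure_psi < 0:
        raise ValueError("Pore pressure cannot be negative.")
    balance_ppg = pore_pressure_psi / (PSI_PER_FT_PER_PPG * tvd_ft)
    return balance_ppg + safety_margin_ppg

# Example: 6,240 psi pore pressure at 10,000 ft TVD
# -> 12.0 ppg to balance, 12.3 ppg with the margin applied.
print(f"{required_mud_weight(6240, 10_000):.1f} ppg")
```

This is exactly where governance earns its keep: the formula is textbook, but a citizen developer still has to verify that the AI didn’t hallucinate the conversion constant or quietly drop the input validation.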
Now multiply that capability across every engineer, geologist, and technician in your organization. Every operational challenge becomes an opportunity for immediate tool creation. Every data analysis need becomes solvable in real-time. Every process optimization idea becomes implementable immediately.
But—and this is crucial—only if you have the governance framework in place to ensure those tools are reliable, secure, and based on trustworthy data and AI models.
The Strategic Choice
The convergence of GenAI and LCNC represents the most significant opportunity and the greatest challenge in enterprise application development today. Organizations that establish robust governance structures, comprehensive training programs, and continuous monitoring capabilities will capture competitive advantages through accelerated development cycles and enhanced operational efficiency.
Organizations that fail to address the governance and security challenges risk significant exposure to cyber threats, regulatory violations, and business continuity failures.
The question isn’t whether to adopt this convergence—your teams are probably already experimenting with these tools whether you know it or not. The question is how quickly you can implement the governance frameworks necessary to realize the potential safely.
What could your teams accomplish with unlimited research analysts and unlimited coders at their disposal? More importantly, what safeguards will you put in place to ensure they can be trusted?
JP Garcia is Founding Partner at Ashwood Advisory Group, specializing in digital transformation strategies for energy companies. He focuses on helping organizations balance innovation with operational excellence across complex industrial environments. Connect with JP on LinkedIn or reach out to discuss your organization’s GenAI and citizen development strategy.
Ready to explore the potential of GenAI-powered citizen development while maintaining robust governance? Contact Ashwood Advisory Group to discuss implementation strategies tailored to your organization’s risk tolerance and operational needs.
