Will the AI boom fuel a global energy crisis?

AI’s thirst for energy is ballooning into a monster of a challenge. And it’s not just about the electricity bills. The environmental fallout is serious, stretching to guzzling precious water resources, creating mountains of electronic waste, and, yes, adding to those greenhouse gas emissions we’re all trying to cut. As AI models get ever more complex and weave themselves into yet more parts of our lives, a massive question mark hangs in the air: can we power this revolution without costing the Earth?

The numbers don’t lie: AI’s energy demand is escalating fast

The sheer computing power needed for the smartest AI out there is on an almost unbelievable upward curve – some say it’s doubling roughly every few months. This isn’t a gentle slope; it’s a vertical climb that’s threatening to leave even our most optimistic energy plans in the dust. To give you a sense of scale, AI’s future energy needs could soon gulp down as much electricity as entire countries like Japan or the Netherlands, or even large US states like California. When you hear stats like that, you start to see the potential squeeze AI could put on the power grids we all rely on.

2024 saw a record 4.3% surge in global electricity demand, and AI’s expansion was a big reason why, alongside the boom in electric cars and factories working harder. Wind back to 2022, and data centres, AI, and even cryptocurrency mining were already accounting for nearly 2% of all the electricity used worldwide – that’s about 460 terawatt-hours (TWh). Jump to 2024, and data centres on their own use around 415 TWh, which is roughly 1.5% of the global total, and growing at 12% a year. AI’s direct share of that slice is still relatively small – about 20 TWh, or 0.02% of global energy use – but hold onto your hats, because that number is set to rocket upwards.

The forecasts? Well, they’re pretty eye-opening. By the end of 2025, AI data centres around the world could demand an extra 10 gigawatts (GW) of power. That’s more than the entire power capacity of a place like Utah. Roll on to 2026, and global data centre electricity use could hit 1,000 TWh – similar to what Japan uses right now. And, by 2027, the global power hunger of AI data centres is tipped to reach 68 GW, which is almost what California had in total power capacity back in 2022.

Towards the end of this decade, the figures get even more jaw-dropping. Global data centre electricity consumption is predicted to double to around 945 TWh by 2030, which is just shy of 3% of all the electricity used on the planet. OPEC reckons data centre electricity use could even triple to 1,500 TWh by then. And Goldman Sachs? They’re saying global power demand from data centres could leap by as much as 165% compared to 2023, with those data centres specifically kitted out for AI seeing their demand shoot up by more than four times. There are even suggestions that data centres could be responsible for up to 21% of all global energy demand by 2030 if you count the energy it takes to get AI services to us, the users.

When we talk about AI’s energy use, it mainly splits into two big chunks: training the AI, and then actually using it. Training enormous models, like GPT-4, takes a colossal amount of energy. Just to train GPT-3, for example, it’s estimated they used 1,287 megawatt-hours (MWh) of electricity, and GPT-4 is thought to have needed a whopping 50 times more than that. While training is a power hog, it’s the day-to-day running of these trained models that can chew through over 80% of AI’s total energy.
It’s reported that asking ChatGPT a single question uses about ten times more energy than a Google search (we’re talking roughly 2.9 Wh versus 0.3 Wh). With everyone jumping on the generative AI bandwagon, the race is on to build ever more powerful – and therefore more energy-guzzling – data centres.

So, can we supply energy for AI – and for ourselves?

This is the million-dollar question, isn’t it? Can our planet’s energy systems cope with this new demand? We’re already juggling a mix of fossil fuels, nuclear power, and renewables. If we’re going to feed AI’s growing appetite sustainably, we need to ramp up and diversify how we generate energy, and fast.

Naturally, renewable energy – solar, wind, hydro, geothermal – is a huge piece of the puzzle. In the US, for instance, renewables are set to go from 23% of power generation in 2024 to 27% by 2026. The tech giants are making some big promises; Microsoft, for example, is planning to buy 10.5 GW of renewable energy between 2026 and 2030 just for its data centres. AI itself could actually help us use renewable energy more efficiently, perhaps cutting energy use by up to 60% in some areas by making energy storage smarter and managing power grids better.

But let’s not get carried away. Renewables have their own headaches. The sun doesn’t always shine, and the wind doesn’t always blow, which is a real problem for data centres that need power around the clock, every single day. The batteries we have now to smooth out these bumps are often expensive and take up a lot of room. Plus, plugging massive new renewable projects into our existing power grids can be a slow and complicated business.

This is where nuclear power is starting to look more appealing to some, especially as a steady, low-carbon way to power AI’s massive energy needs. It delivers that crucial 24/7 power, which is exactly what data centres crave. There’s a lot of buzz around Small Modular Reactors (SMRs) too, because they’re potentially more flexible and have beefed-up safety features. And it’s not just talk; big names like Microsoft, Amazon, and Google are seriously looking into nuclear options. Matt Garman, who heads up AWS, recently put it
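As a rough sanity check on the per-query figures quoted above (roughly 2.9 Wh for a ChatGPT-style query versus 0.3 Wh for a conventional search), here is a minimal back-of-envelope sketch in Python. The daily query volume is an assumption chosen purely for illustration, not a reported figure.

```python
# Back-of-envelope comparison of per-query energy use, based on the ~2.9 Wh
# vs ~0.3 Wh figures quoted above. The query volume below is an assumption
# for illustration only, not a reported statistic.

CHATGPT_WH_PER_QUERY = 2.9   # reported estimate for a single ChatGPT query
SEARCH_WH_PER_QUERY = 0.3    # reported estimate for a single Google search
ASSUMED_QUERIES_PER_DAY = 100_000_000  # hypothetical daily volume

def annual_gwh(wh_per_query: float, queries_per_day: int) -> float:
    """Convert per-query watt-hours into gigawatt-hours per year."""
    wh_per_year = wh_per_query * queries_per_day * 365
    return wh_per_year / 1e9  # 1 GWh = 1e9 Wh

ai_gwh = annual_gwh(CHATGPT_WH_PER_QUERY, ASSUMED_QUERIES_PER_DAY)
search_gwh = annual_gwh(SEARCH_WH_PER_QUERY, ASSUMED_QUERIES_PER_DAY)

print(f"AI-style queries:  ~{ai_gwh:,.0f} GWh/year")
print(f"Search queries:    ~{search_gwh:,.0f} GWh/year")
print(f"Ratio:             ~{ai_gwh / search_gwh:.1f}x")
```

Even under this modest assumed volume, the roughly tenfold gap per query compounds into a sizeable difference at annual scale – which is the dynamic the forecasts above are grappling with.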

AI in business intelligence: Caveat emptor

One of the ways in which organisations are using the latest AI algorithms to help them grow and thrive is the adoption of privately-held AI models in aligning their business strategies. The differentiation between private and public AI is important in this context – most organisations are rightly wary of allowing public AIs access to sensitive data sets, such as HR information, financial data, and details of operational history. It stands to reason that if an AI is given specific data on which to base its responses, its output will be more relevant, and therefore more effective in helping decision-makers to judge how to strategise. Using private reasoning engines is the logical way for companies to get the best results from AI while keeping their intellectual property safe.

Enterprise-specific data and the ability to fine-tune a local AI model give organisations the means to produce bespoke forecasting and operational tuning that are more grounded in the day-to-day reality of a company’s work. A Deloitte Strategy Insight paper calls private AI a “bespoke compass” and positions the use of internal data as a competitive advantage, while Accenture describes AIs as “poised to provide the most significant economic uplift and change to work since the agricultural and industrial revolutions.”

There is the possibility, however, that private AI – which, like traditional business intelligence, draws on historical data from several years of operations across the enterprise – can entrench decision-making in patterns from the past. McKinsey says companies are in danger of “mirroring their institutional past in algorithmic amber.” The Harvard Business Review picks up on some of the technical complexity, stating that customising a model so that its activities are more relevant to the company is difficult, and perhaps, therefore, not a task to be taken on by any but the most AI-literate at the level of data science and programming. MIT Sloan strikes a balance between the fervent advocates and the conservative voices for private AI in business strategising: it advises that AI be regarded as a co-pilot, and urges continual questioning and verification of AI output, especially when the stakes are high.

Believe in the revolution

However, decision-makers considering pursuing this course of action (getting on the AI wave, but doing so in a private, safety-conscious way) may wish to consider the motivations of those sources of advice that advocate strongly for AI enablement in this way. Deloitte, for example, builds and manages AI solutions for clients using custom infrastructure such as its factory-as-a-service offerings, while Accenture has practices dedicated to its clients’ AI strategy, such as Accenture Applied Intelligence. Accenture partners with AWS and Azure, building bespoke AI systems for Fortune 500 companies, among others, and Deloitte partners with Oracle and Nvidia. With ‘skin in the game’, phrases such as “the most significant […] change to work since the agricultural and industrial revolutions” and a “bespoke compass” are inspiring, but the vendors’ motivations may not be entirely altruistic.

Advocates for AI in general rightly point to the ability of models to identify trends and statistical undercurrents much more efficiently than humans. Given the mass of data available to the modern enterprise, comprising both internal and externally-available information, having software that can parse data at scale is an incredible advantage.
Instead of manually creating analyses of huge repositories of data – which is time-consuming and error-prone – AI can see through the chaff and surface real, actionable insights.

Asking the right questions

Additionally, AI models can interpret queries couched in normal language, and make predictions based on empirical information, which, in the context of private AIs, is highly relevant to the organisation. Relatively unskilled personnel can query data without having skills in statistical analysis or database query languages, and get answers that otherwise would have involved multiple teams and skill-sets drawn from across the enterprise. That time-saving alone is considerable, letting organisations focus on strategy, rather than forming the necessary data points and manually querying the information they’ve managed to gather.

Both McKinsey and Gartner warn, however, of overconfidence and data obsolescence. On the latter, historical data may not be relevant to strategising, especially if records go back several years. Overconfidence, in the context of AI, is perhaps best described as operators trusting AI responses without question, not delving independently into the detail of responses, or in some cases, taking as fact the responses to badly-phrased queries. For any software algorithm, human phrases such as “base your findings on our historical data” are open to interpretation, unlike, for example, “base your findings on the last twelve months’ sales data, ignoring outliers that differ from the mean by over 30%, although do state those instances for me to consider.”

Software of experience

Organisations might pursue private AI solutions alongside mature, existing business intelligence platforms. SAP BusinessObjects is nearly 30 years old, yet a youngster compared to SAS Business Intelligence, which has been around since before the internet became mainstream in the 1990s. Even relative newcomers such as Microsoft Power BI represent at least a decade of development, iteration, customer feedback, and real-world use in business analysis. It seems sensible, therefore, that private AI’s deployment on business data should be regarded as an addition to the strategiser’s toolkit, rather than a silver bullet that replaces “traditional” tools.

For users of private AI who have the capacity to audit and tweak their model’s inputs and inner algorithms, retaining human control and oversight is important – just as it is with tools like Oracle’s Business Intelligence suite. There are some scenarios where the intelligent processing of, and acting on, real-time data (online retail pricing mechanisms, for example) gives AI analysis a competitive edge over the incumbent BI platforms. But AI has yet to develop into a magical Swiss Army Knife for business strategy. Until AI purposed for business data analysis is as developed, iterated on, battle-hardened, and mature as some of the market’s go-to BI platforms, early adopters might temper the enthusiasm of AI and AI service vendors with practical experience and a critical
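To make the well-phrased-query point above concrete, here is a minimal pandas sketch of the kind of explicit, unambiguous filter the quoted prompt describes – last twelve months of sales, with values more than 30% away from the mean flagged rather than silently dropped. The column names and the tiny DataFrame are assumptions for illustration, not taken from any particular platform.

```python
import pandas as pd

# Hypothetical sales table; column names and values are assumptions.
sales = pd.DataFrame({
    "date": pd.to_datetime(["2024-07-15", "2024-11-02", "2025-03-20", "2025-05-01"]),
    "revenue": [120_000, 130_000, 310_000, 125_000],
})

# 1. Restrict to the last twelve months, relative to the latest record.
cutoff = sales["date"].max() - pd.DateOffset(months=12)
recent = sales[sales["date"] >= cutoff]

# 2. Flag outliers that differ from the mean by more than 30%.
mean_revenue = recent["revenue"].mean()
is_outlier = (recent["revenue"] - mean_revenue).abs() > 0.30 * mean_revenue

analysis_set = recent[~is_outlier]        # used for the findings
flagged_for_review = recent[is_outlier]   # stated separately, as the prompt asks

print(analysis_set)
print(flagged_for_review)
```

The point isn’t the pandas itself; it’s that a prompt to a private AI carries the same burden of precision – the model has to be told what “historical data” and “outlier” mean before its answer can be trusted.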

Why Microsoft is cutting roles despite strong earnings

Microsoft is cutting about 7,000 jobs, or 3% of its workforce. The move isn’t about poor performance or falling revenue. It’s a clear shift in strategy—fewer layers, more engineers, and more investment in artificial intelligence. The layoffs affect staff across divisions and global offices. But the bulk of those let go are in middle management and non-technical roles, a pattern showing up across tech. The message: reduce overhead, speed up product cycles, and make room for bigger AI spending.

The numbers behind the shift

Microsoft ended its latest quarter with $70.07 billion in revenue. That beat Wall Street estimates and shows strong business health, and the company plans to spend as much as $80 billion this fiscal year—mainly on data centres designed for training and running AI models. That’s a big leap in infrastructure spending, but it also explains why Microsoft is trimming elsewhere. AI models are compute-heavy and demand new types of hardware. Storage, cooling, and power need to scale. Building that capacity takes money, time, and fewer internal delays, and Microsoft appears to be cutting anything that slows the push.

Management in the firing line

Most cuts hit middle managers and support staff. These are roles that help coordinate, review, and report—but don’t directly write code or design systems. While these positions have long helped large companies function, they’re now being seen as blockers to fast action. Sources told Business Insider that Microsoft wants a higher ratio of technical staff to managers. This isn’t just about saving costs, it’s about reducing the number of people between engineers and final decisions. Analyst Rishi Jaluria told the Financial Times that tech giants like Microsoft have “too many layers.” He said companies are trying to strip back bureaucracy as they chase AI leadership. Microsoft has not publicly broken down which departments were most affected. But reports suggest LinkedIn, a Microsoft subsidiary, saw job cuts as part of this broader shift.

Aligning with a broader industry trend

Microsoft isn’t the only company trimming management, as Amazon, Google, and Meta have all done similarly. They’re removing layers and pushing more decisions closer to those building the product. For Microsoft, the changes come after several earlier rounds of cuts. In early 2024, the company laid off around 2,000 workers in performance-based trims. This new wave is different as it targets structure, not staff output.

$80 billion on AI infrastructure

Microsoft’s investment plan puts AI at the centre of its growth. According to Reuters, the company wants to spend up to $80 billion in fiscal 2025, much of it going toward AI-enabled data centres. These centres power large language models, natural language tools, and enterprise AI systems. Without them, even the best models won’t run at scale. The company’s move shows how serious it is about owning the AI backbone. This is about more than software updates, it’s about physical hardware, cloud capacity, and tight control over how AI gets built and used. Microsoft’s early partnership with OpenAI gave it a jumpstart, but Google, Meta, Amazon, and Apple are all making big AI moves. Microsoft appears to be betting that first-mover advantage is only as strong as the infrastructure behind it.

Employee reactions reflect mixed sentiment

As with most layoffs, employee reactions vary. Some posts on social media reflect understanding, others voice concern about job security and team stability.
Several ex-employees described the mood as “tense but expected.” Many said they had been preparing for changes since Microsoft’s 2024 performance cuts. Some worry that too much focus on AI will weaken support roles, and others believe cutting managers will create confusion rather than clarity. Still, public sentiment shows a growing acceptance that AI is changing what jobs look like—even at the biggest firms.

What this means for the industry

Microsoft’s restructuring sets a tone: strong revenue no longer guarantees job security, and growth in AI now drives org charts, not the other way around. Middle management is no longer safe, and non-technical roles must prove direct value to AI goals. Even product teams may face more pressure to automate or streamline. For employees, the message is clear: learn how AI fits your job—or risk being cut from the plan. For other tech firms, Microsoft’s strategy may serve as a roadmap. Spending more on AI means spending less elsewhere, and many companies will likely follow that playbook to stay competitive.

Long-term questions remain

The short-term logic is clear. Microsoft is cutting structure to fund AI growth. But over time, companies will need to balance innovation with internal support. Removing middle managers may speed up some work, but it can also reduce mentorship, training, and context—things that help teams stay aligned. AI may need more data and compute. But people still build the tools, ask the right questions, and set the goals. How companies treat those people now will shape how well they compete later.

(Photo by Ron Lach)

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Congress pushes GPS tracking for every exported semiconductor

America’s quest to protect its semiconductor technology from China has taken increasingly dramatic turns over the past few years—from export bans to global restrictions—but the latest proposal from Congress ventures into unprecedented territory. Lawmakers are now pushing for mandatory GPS-style tracking embedded in every AI chip exported from the United States, essentially turning advanced semiconductors into devices that report their location back to Washington.

On May 15, 2025, a bipartisan group of eight House representatives introduced the Chip Security Act, which would require companies like Nvidia to embed location verification mechanisms in their processors before export. This represents perhaps the most invasive approach yet in America’s technological competition with China, moving far beyond restricting where chips can go to actively monitoring where they end up.

The mechanics of AI chip surveillance

Under the proposed Chip Security Act, AI chip surveillance would become mandatory for all “covered integrated circuit products”—including those classified under Export Control Classification Numbers 3A090, 3A001.z, 4A090, and 4A003.z. Companies like Nvidia would be required to embed location verification mechanisms in their AI chips before export, reexport, or in-country transfer to foreign nations.

Representative Bill Huizenga, the Michigan Republican who introduced the House bill, stated that “we must employ safeguards to help ensure export controls are not being circumvented, allowing these advanced AI chips to fall into the hands of nefarious actors.” His co-lead, Representative Bill Foster—an Illinois Democrat and former physicist who designed chips during his scientific career—added, “I know that we have the technical tools to prevent powerful AI technology from getting into the wrong hands.”

The legislation goes far beyond simple location tracking. Companies would face ongoing surveillance obligations, required to report any credible information about chip diversion, including location changes, unauthorized users, or tampering attempts. This creates a continuous monitoring system that extends indefinitely beyond the point of sale, fundamentally altering the relationship between manufacturers and their products.

Cross-party support for technology control

Perhaps most striking about this AI chip surveillance initiative is its bipartisan nature. The bill enjoys broad support across party lines, co-led by House Select Committee on China Chairman John Moolenaar and Ranking Member Raja Krishnamoorthi. Other cosponsors include Representatives Ted Lieu, Rick Crawford, Josh Gottheimer, and Darin LaHood. Moolenaar said that “the Chinese Communist Party has exploited weaknesses in our export control enforcement system—using shell companies and smuggling networks to divert sensitive US technology.”

The bipartisan consensus on AI chip surveillance reflects how deeply the China challenge has penetrated American political thinking, transcending traditional partisan divisions. The Senate has already introduced similar legislation through Senator Tom Cotton, suggesting that semiconductor surveillance has broad congressional support. Coordination between chambers indicates that some form of AI chip surveillance may become law regardless of which party controls Congress.

Technical challenges and implementation questions

The technical requirements for implementing AI chip surveillance raise significant questions about feasibility, security, and performance.
The bill mandates that chips implement “location verification using techniques that are feasible and appropriate” within 180 days of enactment, but provides little detail on how such mechanisms would work without compromising chip performance or introducing new vulnerabilities. For industry leaders like Nvidia, implementing mandatory surveillance technology could fundamentally alter product design and manufacturing processes. Each chip would need embedded capabilities to verify its location, potentially requiring additional components, increased power consumption, and processing overhead that could impact performance—precisely what customers in AI applications cannot afford.

The bill also grants the Secretary of Commerce broad enforcement authority to “verify, in a manner the Secretary determines appropriate, the ownership and location” of exported chips. This creates a real-time surveillance system where the US government could potentially track every advanced semiconductor worldwide, raising questions about data sovereignty and privacy.

Commercial surveillance meets national security

The AI chip surveillance proposal represents an unprecedented fusion of national security imperatives with commercial technology products. Unlike traditional export controls that simply restrict destinations, the approach creates ongoing monitoring obligations that blur the lines between private commerce and state surveillance. Representative Foster’s background as a physicist lends technical credibility to the initiative, but it also highlights how scientific expertise can be enlisted in geopolitical competition. The legislation reflects a belief that technical solutions can solve political problems—that embedding surveillance capabilities in semiconductors can prevent their misuse.

Yet the proposed law raises fundamental questions about the nature of technology export in a globalized world. Should every advanced semiconductor become a potential surveillance device? How will mandatory AI chip surveillance affect innovation in countries that rely on US technology? What precedent does this set for other nations seeking to monitor their technology exports?

Accelerating technological decoupling

The mandatory AI chip surveillance requirement could inadvertently accelerate the development of alternative semiconductor ecosystems. If US chips come with built-in tracking mechanisms, countries may intensify efforts to develop domestic alternatives or source from suppliers without such requirements. China, already investing heavily in semiconductor self-sufficiency following years of US restrictions, may view these surveillance requirements as further justification for technological decoupling. The irony is striking: efforts to track Chinese use of US chips may ultimately reduce their appeal and market share in global markets.

Meanwhile, allied nations may question whether they want their critical infrastructure dependent on chips that can be monitored by the US government. The legislation’s broad language suggests that AI chip surveillance would apply to all foreign countries, not just adversaries, potentially straining relationships with partners who value technological sovereignty.

The future of semiconductor governance

As the Trump administration continues to formulate its replacement for Biden’s AI Diffusion Rule, Congress appears unwilling to wait.
The Chip Security Act represents a more aggressive approach than traditional export controls, moving from restriction to active surveillance in ways that could reshape the global semiconductor industry. This evolution reflects deeper changes in how nations view technology exports in an era of great power competition. The semiconductor industry, once governed primarily by market forces and technical standards, increasingly operates under geopolitical imperatives that prioritize control over commerce. Whether AI chip surveillance becomes law depends on congressional action and industry response. But the

Can the US really enforce a global AI chip ban?

When Huawei shocked the global tech industry with its Mate 60 Pro smartphone featuring an advanced 7-nanometer chip despite sweeping US technology restrictions, it demonstrated that innovation finds a way even under the heaviest sanctions. The US response was swift and predictable: tighter export controls and expanded restrictions. Now, with reports suggesting Huawei’s Ascend AI chips are approaching Nvidia-level performance—though the Chinese company remains characteristically silent about these developments—America has preemptively escalated its semiconductor war to global proportions.

The Trump administration’s declaration that using Huawei’s Ascend chips “anywhere in the world” violates US export controls reveals more than policy enforcement—it exposes a fundamental fear that American technological dominance may no longer be guaranteed through restrictions alone.

This global AI chip ban emerged on May 14, 2025, when President Donald Trump’s administration rescinded the Biden-era AI Diffusion Rule without revealing details of a replacement policy. Instead, the Bureau of Industry and Security (BIS) announced guidance to “strengthen export controls for overseas AI chips,” specifically targeting Huawei’s Ascend processors. The new guidelines warn of “enforcement actions” including imprisonment and fines for any global business found using these Chinese-developed chips—a fundamental departure from traditional export controls, which typically govern what leaves a country’s borders, not what happens entirely outside them.

The scope of America’s tech authority

The South China Morning Post reports that these new guidelines explicitly single out Huawei’s Ascend chips after scrapping the Biden administration’s country-tiered “AI diffusion” rule. But the implications of this global AI chip ban extend far beyond bilateral US-China tensions. By asserting jurisdiction over global technology choices, America essentially demands that sovereign nations and independent businesses worldwide comply with its domestic policy preferences.

This extraterritorial approach raises fundamental questions about national sovereignty and international trade. Should a Brazilian AI startup be prevented from using the most cost-effective chip solution simply because those chips are manufactured by a Chinese company? Should European research institutions abandon promising collaborations because they involve hardware Washington deems unacceptable?

According to Financial Times reporting, BIS stated that Huawei’s Ascend 910B, 910C, and 910D were all subject to the regulations as they were likely “designed with certain US software or technology or produced with semiconductor manufacturing equipment that is the direct product of certain US-origin software or technology, or both.”

Industry resistance to universal controls

Even within the United States, the chipmaking sector expresses alarm about Washington’s semiconductor policies. The aggressive expansion of export controls creates uncertainty beyond Chinese companies, affecting global supply chains and innovation partnerships built over decades. “Washington’s new guidelines are essentially forcing global tech firms to pick a side – Chinese or US hardware – which will further deepen the tech divide between the world’s two largest economies,” analysts note. This forced binary choice ignores the nuanced reality of modern technology development, where innovation emerges from diverse, international collaborations. The economic implications prove staggering.
Recent analysis indicates Huawei’s Ascend 910B AI chip delivers 80% of Nvidia A100’s efficiency when training large language models, though “in some other tests, Ascend chips can beat the A100 by 20%.” By blocking access to competitive alternatives, this global AI chip ban may inadvertently stifle innovation and maintain artificial market monopolies.

The innovation paradox

Perhaps most ironically, policies intended to maintain American technological leadership may undermine it. Nvidia CEO Jensen Huang acknowledged earlier this month that Huawei was “one of the most formidable technology companies in the world,” noting that China was “not behind” in AI development. Attempting to isolate such capabilities through global restrictions may accelerate the development of parallel technology ecosystems, ultimately reducing American influence rather than preserving it.

The secrecy surrounding Huawei’s Ascend chips—with the company keeping “details of its AI chips close to its chest, with only public information coming from third-party teardown reports”—has intensified with US sanctions. Following escalating restrictions, Huawei stopped officially disclosing information about the series, including release dates, production schedules, and fabrication technologies. The chips specified in current US restrictions, including the Ascend 910C and 910D, haven’t even been officially confirmed by Huawei.

Geopolitical ramifications

In a South China Morning Post report, Chim Lee, a senior analyst at the Economist Intelligence Unit, warns that “if the guidance is enforced strictly, it is likely to provoke retaliation from China” and could become “a negotiating point in ongoing trade talks between Washington and Beijing.” This assessment underscores the counterproductive nature of aggressive unilateral action in an interconnected global economy.

The semiconductor industry thrives on international collaboration, shared research, and open competition. Policies that fragment this ecosystem serve no one’s long-term interests—including America’s. As the global community grapples with challenges from climate change to healthcare innovation, artificial barriers preventing the best minds from accessing optimal tools ultimately harm human progress.

Beyond binary choices

The question isn’t whether nations should protect strategic interests—they should and must. But when export controls extend “anywhere in the world,” we cross from legitimate national security policy into technological authoritarianism. The global technology community deserves frameworks that balance security concerns with innovation imperatives. This global AI chip ban risks accelerating the technological fragmentation it seeks to prevent. History suggests markets divided by political decree often spawn parallel innovation ecosystems that compete more effectively than those operating under artificial constraints.

Rather than extending controls globally, a strategic approach would focus on out-innovating competitors through superior technology and international partnerships. The current path toward technological bifurcation serves neither American interests nor global innovation—it simply creates a more fragmented, less efficient world where artificial barriers replace natural competition. The semiconductor industry’s future depends on finding sustainable solutions that address legitimate security concerns without dismantling the collaborative networks that drive technological advancement.
As this global AI chip ban takes effect, the world watches to see whether innovation will flourish through competition or fragment through control.

See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance

AI tool speeds up government feedback, experts urge caution

An AI tool aims to wade through mountains of government feedback and understand what the public is trying to say. UK Technology Secretary Peter Kyle said: “No one should be wasting time on something AI can do quicker and better, let alone wasting millions of taxpayer pounds on outsourcing such work to contractors.”

This digital assistant, aptly named ‘Consult’, just aced its first big test with the Scottish Government. The Scottish Government threw Consult in at the deep end, asking it to make sense of public opinion on regulating non-surgical cosmetic procedures such as lip fillers and laser hair removal. Consult came back with findings almost identical to what human officials had pieced together. Now, the plan is to roll this tech out across various government departments.

The current way of doing things is expensive and slow. Millions of pounds often go to outside contractors just to analyse what the public thinks. Consult is part of a bigger push to build a leaner, more responsive UK government—one that can deliver on its ‘Plan for Change’ without breaking the bank or taking an age to do it.

So, how did it fare in Scotland? Consult chewed through responses from over 2,000 people. Using generative AI, it picked out the main themes and concerns bubbling up from the feedback across six key questions. Of course, Consult wasn’t left completely to its own devices. Experts in the Scottish Government double-checked and fine-tuned these initial themes. Then, the AI got back to work to sort individual responses into these categories. Officials ended up with more precious time to consider what people were saying and what it meant for policy.

Because this was Consult’s first live outing, the Scottish Government went through every single response by hand too—just to be sure. Figuring out exactly what someone means in a written comment and then deciding which ‘theme’ it fits under can be a bit subjective. Even humans don’t always agree. When the government compared Consult’s handiwork to human analysis, the AI was right most of the time. Where there were differences, they were so minor they didn’t change the overall picture of what mattered most to people.

Consult is part of a bigger AI toolkit called ‘Humphrey’—a suite of digital helpers designed to free up civil servants from admin and cut down on those contractor bills. It’s all part of a grander vision to use technology to sharpen up public services, aiming to find £45 billion in productivity savings. The goal is a more nimble government that is better at delivering that ‘Plan for Change’ we keep hearing about.

“After demonstrating such promising results, Humphrey will help us cut the costs of governing and make it easier to collect and comprehensively review what experts and the public are telling us on a range of crucial issues,” added Kyle. “The Scottish Government has taken a bold first step. Very soon, I’ll be using Consult, within Humphrey, in my own department and others in Whitehall will be using it too – speeding up our work to deliver the Plan for Change.”

Over in Scotland, Public Health Minister Jenni Minto said: “Using the tool was very beneficial in helping the Scottish Government understand more quickly what people wanted us to hear and our respondents’ range of views.
“Using this tool has allowed the Scottish Government to move more quickly to a focus on the policy questions and dive into the detail of the evidence we’ve been presented with, while remaining confident that we have heard the strong views expressed by respondents.”

Of course, like many AI deployments in government, it’s still early days, and Consult is officially still in the trial phase. More number-crunching and testing are on the cards to make sure it’s working just as it should before any big decisions about a full rollout are made. But the potential here is huge. The government runs about 500 consultations every year. If Consult lives up to its promise, it could save officials a staggering 75,000 days of analysis annually.

And what did the civil servants who first worked with Consult think? They were reportedly “pleasantly surprised,” finding the AI’s initial analysis a “useful starting point.” Others raved that it “saved [them] a heck of a lot of time” and let them “get to the analysis and draw out what’s needed next” much faster. Interestingly, they also felt Consult brought a new level of fairness to the table. As one official put it, its use “takes away the bias and makes it more consistent,” preventing individual analysts from, perhaps unconsciously, letting their “own preconceived ideas” colour the findings.

Some consultations receive tens, even hundreds of thousands of responses. Given how well Consult has performed in these early tests, it won’t be long before it’s used on these massive consultations. It’s worth noting that humans aren’t being kicked out of the loop. Consult has been built to keep the experts involved every step of the way. Officials will always review the themes the AI suggests and how it sorts the responses. They’ll have an interactive dashboard to play with, letting them filter and search for specific insights. It’s about AI doing the heavy lifting, so the humans can do the smart thinking.

Experts urge caution about the use of AI in government

This move towards AI in government isn’t happening in a vacuum, and experts are watching closely. Stuart Harvey, CEO of Datactics, commented: “Using AI to speed up public consultations is a great example of how technology can improve efficiency and save money. But AI is only as good as the data behind it. For tools like this to work well and fairly, government departments need to make sure their data is accurate, up-to-date, and properly managed.

“People need to trust the decisions made with AI. That means making sure the process is clear, well-governed, and ethical. If the data is messy or poorly handled, it can lead to biased or unreliable outcomes.

“As the government expands
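Consult’s reported workflow – propose themes, let officials refine them, then sort each response into the agreed categories – is essentially a human-in-the-loop text classification pipeline. The sketch below is a heavily simplified, generic illustration of that pattern using scikit-learn; it is an assumption about the general technique, not a description of how Consult is actually built, and the themes and responses are invented examples.

```python
# Generic sketch of a "sort responses into agreed themes" step, NOT the actual
# Consult implementation. Themes and responses are invented examples; a real
# system would use far richer models plus the official review described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Themes as refined by human officials (assumed examples).
themes = {
    "safety": "concerns about safety, complications and medical risks",
    "regulation": "calls for licensing, qualified practitioners and enforcement",
    "access": "cost, availability and access to procedures",
}

responses = [
    "Only licensed, qualified practitioners should be allowed to perform these procedures.",
    "I worry about safety risks and complications from unregulated clinics.",
]

# Embed theme descriptions and responses in the same TF-IDF space.
vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(list(themes.values()) + responses)
theme_vecs, response_vecs = matrix[: len(themes)], matrix[len(themes):]

# Assign each response to its closest theme; officials then review the output.
similarity = cosine_similarity(response_vecs, theme_vecs)
theme_names = list(themes.keys())
for text, scores in zip(responses, similarity):
    print(f"{theme_names[scores.argmax()]}: {text}")
```

The important part, mirrored in the article above, is the review step: the model proposes an assignment, and humans confirm or correct it before it informs policy.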

Open-source AI video tool for all

Alibaba has unveiled Wan2.1-VACE, an open-source AI model designed to shake up how we create and edit videos. VACE isn’t appearing out of thin air; it’s part of Alibaba’s broader Wan2.1 family of video AI models. And they’re making a rather bold claim for it, stating it’s the “first open-source model in the industry to provide a unified solution for various video generation and editing tasks.” If Alibaba can succeed in shifting users away from having to juggle multiple, separate tools towards one streamlined hub—it could be a true game-changer.

So, what can this thing actually do? Well, for starters, it can whip up videos using all sorts of prompts, including text commands, still pictures, and even snippets of other video clips. But it’s not just about making videos from scratch. The editing toolkit supports referencing images or specific frames to guide the AI, advanced video “repainting” (more on that in a sec), tweaking just selected bits of your existing video, and even stretching out the video. Alibaba reckons these features “enable the flexible combination of various tasks to enhance creativity.”

Imagine you want to create a video with specific characters interacting, maybe based on some photos you have. VACE claims to be able to do that. Got a still image you wish was dynamic? Alibaba’s open-source AI model can add natural-looking movement to bring it to life. For those who love to fine-tune, there are those advanced “video repainting” functions I mentioned earlier. This includes things like transferring poses from one subject to another, having precise control over motion, adjusting depth perception, and even changing the colours.

One feature that caught my eye is that it “supports adding, modification or deletion to selective specific areas of a video without affecting the surroundings.” That’s a massive plus for detailed edits – no more accidentally messing up the background when you’re just trying to tweak one small element. Plus, it can make your video canvas bigger and even fill in the new space with relevant content to make everything look richer and more expansive.

You could take a flat photograph, turn it into a video, and tell the objects in it exactly how to move by drawing out a path. Need to swap out a character or an object with something else you provide as a reference? No problem. Animate those referenced characters? Done. Control their pose precisely? You got it. Alibaba even gives the example of its open-source AI model taking a tall, skinny vertical image and cleverly expanding it sideways into a widescreen video, automagically adding new bits and pieces by referencing other images or prompts. That’s pretty neat.

Of course, VACE isn’t just magic. There’s some clever tech involved, designed to handle the often-messy reality of video editing. A key piece is something Alibaba calls the Video Condition Unit (VCU), which “supports unified processing of multimodal inputs such as text, images, video, and masks.” Then there’s what they term a “Context Adapter structure.” This clever bit of engineering “injects various task concepts using formalised representations of temporal and spatial dimensions.” Essentially, think of it as giving the AI a really good understanding of time and space within the video. With all this clever tech, Alibaba reckons VACE will be a hit in quite a few areas.
Think quick social media clips, eye-catching ads and marketing content, heavy-duty post-production special effects for film and TV, and even generating custom educational and training videos.

Alibaba makes Wan2.1-VACE open-source to spread the AI love

Building AI models this powerful usually costs a fortune and needs massive computing power and tons of data. So, Alibaba making Wan2.1-VACE open source? That’s a big deal. “Open access helps lower the barrier for more businesses to leverage AI, enabling them to create high-quality visual content tailored to their needs, quickly and cost-effectively,” Alibaba explains. Basically, Alibaba is hoping to let more folks – especially smaller businesses and individual creators – get their hands on top-tier AI without breaking the bank. This democratisation of powerful tools is always a welcome sight.

And they’re not just dropping one version. There’s a hefty 14-billion parameter model for those with serious horsepower, and a more nimble 1.3-billion parameter one for lighter setups. You can grab them for free right now on Hugging Face and GitHub, or via Alibaba Cloud’s own open-source community, ModelScope.

(Image source: www.alibabagroup.com)

See also: US slams brakes on AI Diffusion Rule, hardens chip export curbs
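For readers who want to try the weights themselves, a minimal sketch of fetching a checkpoint from Hugging Face follows. The repository ID and target directory are assumptions for illustration – check the Wan2.1 model cards for the exact names and for the inference code that goes with them.

```python
# Minimal sketch of downloading an open-weight checkpoint from Hugging Face.
# The repo_id below is an assumed placeholder; confirm the exact repository
# name on the Wan2.1 model cards before running.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Wan-AI/Wan2.1-VACE-1.3B",   # assumed repo name for the 1.3B variant
    local_dir="./wan2.1-vace-1.3b",      # where to place the downloaded files
)

print(f"Model files downloaded to: {local_path}")
```

The smaller 1.3-billion parameter variant mentioned above is the sensible starting point on modest hardware; the 14-billion parameter model will need a far heftier GPU setup.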

Innovation vs oversight in drug regulation

The US Food and Drug Administration (FDA) has stated that it wants to accelerate the deployment of AI across its centres. FDA Commissioner Martin A. Makary has announced an aggressive timeline to scale use of AI by 30 June 2025 and is betting big on the technology to change drug approval processes for the US. But the rapid AI deployment at the FDA raises important questions about whether innovation can be balanced with oversight.

Strategic leadership drive: FDA names first AI chief

The foundation for the ambitious FDA AI deployment was laid with the appointment of Jeremy Walsh as the first-ever Chief AI Officer. Walsh previously led enterprise-scale technology deployments in federal health and intelligence agencies and came from government contractor Booz Allen Hamilton, where he worked for 14 years as chief technologist. His appointment, announced just before the May 8th rollout announcement, signals the agency’s serious commitment to technological transformation.

The timing is significant – Walsh’s hiring coincided with workforce cuts at the FDA, including the loss of key tech talent. Among the losses was Sridhar Mantha, the former director of strategic programmes at the Center for Drug Evaluation and Research (CDER), who had co-chaired the AI Council at CDER and helped develop policy around AI’s use in drug development. Ironically, Mantha is now working alongside Walsh to coordinate the agency-wide rollout.

The pilot programme: Impressive results, limited details

What’s driving the rapid AI deployment is the reported success of the agency’s pilot programme trialling the software. Commissioner Makary said he was “blown away by the success of our first AI-assisted scientific review pilot,” with one official claiming the technology enabled him to perform scientific review tasks in minutes that used to take three days. However, the scope, rigour and results of the pilot scheme remain unreleased. The agency has not published detailed reports on the pilot’s methodology, validation procedures, or specific use cases tested. The lack of transparency is concerning given the high-stakes nature of drug evaluation.

When pressed for details, the FDA has promised that additional details and updates on the initiative will be shared publicly in June. For an agency responsible for protecting public health through rigorous scientific review, the absence of published pilot data raises questions about the evidence base supporting such an aggressive timeline.

Industry perspective: Cautious optimism meets concerns

The pharmaceutical industry’s reaction to the FDA AI deployment reflects a mixture of optimism and apprehension. Companies have long sought faster approval processes, with Makary pointedly asking, “Why does it take over 10 years for a new drug to come to market?” “While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the centre. We’re pleased to see the FDA taking concrete action to harness the potential of AI,” said PhRMA spokesperson Andrew Powaleny.

However, industry experts are raising practical concerns. Mike Hinckle, an FDA compliance expert at K&L Gates, highlighted a key issue: pharmaceutical companies will want to know how the proprietary data they submit will be secured. The concern is particularly acute given reports that the FDA was in discussions with OpenAI about a project called cderGPT, which appears to be an AI tool for the Center for Drug Evaluation and Research.
Expert warnings: The rush vs rigour debate

Leading experts in the field are expressing concern about the pace of deployment. Eric Topol, founder of the Scripps Research Translational Institute, told Axios: “The idea is good, but the lack of details and the perceived ‘rush’ is concerning.” He identified critical gaps in transparency, including questions about which models are being used to train the AI, and what inputs are provided for specialised fine-tuning.

Former FDA commissioner Robert Califf struck a balanced tone: “I have nothing but enthusiasm tempered by caution about the timeline.” His comment reflects the broader sentiment among experts who support AI integration but question whether the June 30th deadline allows sufficient time for proper validation and safeguards to be implemented. Rafael Rosengarten from the Alliance for AI in Healthcare supports automation but emphasises the need for governance, saying there is a need for policy guidance around what kind of data is used to train AI models and what kind of model performance is considered acceptable.

Political context: Trump’s deregulatory AI vision

The FDA AI deployment must be understood in the broader context of the Trump administration’s approach to AI governance. Trump’s overhaul of federal AI policy – ditching Biden-era guardrails in favour of speed and international dominance in technology – has turned the government into a tech testing ground. The administration has explicitly prioritised innovation over precaution. Vice President JD Vance outlined four key AI policy priorities, including encouraging “pro-growth AI policies” instead of “excessive regulation of the AI sector,” and he has taken action to ensure the forthcoming White House AI Action Plan would “avoid an overly precautionary regulatory regime.”

The philosophy is evident in how the FDA is approaching its AI deployment. With Elon Musk leading a charge under an “AI-first” flag, critics warn that rushed rollouts at agencies could compromise data security, automate important decisions, and put Americans at risk.

Safeguards and governance: What’s missing?

While the FDA has promised that its AI systems will maintain strict information security and act in compliance with FDA policy, specific details about safeguards remain sparse. The agency claims that AI is a tool to support, not replace, human expertise and can enhance regulatory rigour by helping predict toxicities and adverse events. This provides some reassurance but lacks specificity.

The absence of published governance frameworks for what is an internal process contrasts sharply with the FDA’s guidance for industry. The agency has previously issued draft guidance to pharma companies, providing recommendations on the use of AI intended to support a regulatory decision about a drug or biological product’s safety, effectiveness, or quality. Its published draft guidance in that instance was based on feedback from over 800 external comments and its experience with more than 500 drug submissions involving AI components in their development.

Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA

Saudi Arabia’s new state subsidiary, HUMAIN, is collaborating with NVIDIA to build AI infrastructure, nurture talent, and launch large-scale digital systems. The effort includes plans to set up AI “factories” powered by up to 500 megawatts of energy. The sites will be filled with NVIDIA GPUs, including the Grace Blackwell GB300 supercomputers connected via NVIDIA’s InfiniBand network. The goal is to create a base for training models, running simulations, and managing complex AI deployments.

A major part of the push is about control. Saudi Arabia wants to build sovereign AI – models trained using local data, language, and systems. By building its own infrastructure, it avoids relying on foreign cloud providers. The shift aligns with a broader trend, as governments around the world start to question how AI tools are built, where data goes, and who controls it. HUMAIN is meant to give Saudi Arabia more say in that process.

While other countries have launched national AI strategies, HUMAIN stands out for its structure. It’s not just a policy office or research fund; instead, it operates across the full AI value chain – building data centres, managing data, training models, and deploying applications. Few countries have a single body with such a broad remit. Singapore’s NAIS 2.0, for example, focuses on public sector use cases and talent development, while the UAE’s approach emphasises frameworks and governance. China has set up AI labs in several cities, but they tend to work in silos. HUMAIN brings these elements together with a central goal: make Saudi Arabia a producer, not just a user, of AI.

The ambition is clear, but it comes with trade-offs. Running GPU-heavy data centres on this scale will use a lot of power. The 500-megawatt figure is far beyond typical enterprise deployments. Globally, the environmental cost of AI has become a growing concern. Microsoft and Google have both reported rising emissions from AI-related infrastructure. Saudi Arabia will need to explain how its AI factories will be powered – especially if it wants to align with its own sustainability targets under Vision 2030.

The partnership with NVIDIA isn’t just about machines, it also includes training for people. HUMAIN and NVIDIA say they will run large-scale education programmes to help thousands of Saudi developers gain skills in AI, robotics, simulation, and digital twins. Building local talent is a core part of the effort, and without it, the infrastructure likely won’t be used to its full potential.

“AI, like electricity and internet, is essential infrastructure for every nation,” said Jensen Huang, founder and CEO of NVIDIA. “Together with HUMAIN, we are building AI infrastructure for the people and companies of Saudi Arabia to realise the bold vision of the Kingdom.”

One of the tools HUMAIN plans to deploy is NVIDIA Omniverse, to be used as a multi-tenant platform for industries like logistics, manufacturing, and energy. These sectors could create digital twins – virtual versions of real systems – to test, monitor, and improve operations. The idea is simple: simulate before you build, or run stress tests in digital form to save time and money later. This type of simulation and optimisation supports Saudi Arabia’s broader push into automation and smart industry. It fits in a wider narrative of transitioning from oil to advanced tech as a core pillar of the economy.

The deal fits into NVIDIA’s global strategy, and the company has similar partnerships in India, the UAE, and Europe.
Saudi Arabia offers strong government support, deep funding, and the promise of becoming a new AI hub in the Middle East. In return, NVIDIA provides the technical backbone – GPUs, software platforms, and the know-how to run them. The partnership helps both sides. Saudi Arabia gets the tools to build AI from the ground up and reshape its economy, while NVIDIA gains a long-term customer and a foothold in a growing market.

There are still gaps to watch. How will HUMAIN govern the use of its models? Will they be open for researchers and startups, or tightly controlled by the state? What role will local universities or private companies play? And can workforce development keep pace with the rapid buildout of infrastructure?

HUMAIN isn’t just building for now. The structure suggests a long-term bet – one that links compute power, national priorities, and a shift in how AI is developed and deployed. Saudi Arabia wants more than access. It wants influence. And HUMAIN, in partnership with NVIDIA, is the engine it’s building to get there.

(Photo by Mariia Shalabaieva)

See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance
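To put the 500-megawatt figure quoted above into context, here is a small back-of-envelope calculation; the utilisation factor is an assumption for illustration, since the article does not say how heavily the sites would actually run.

```python
# Rough context for the "up to 500 megawatts" figure quoted above.
# The utilisation factor is an assumption for illustration only.
CAPACITY_MW = 500            # headline figure from the HUMAIN announcement
HOURS_PER_YEAR = 8_760
ASSUMED_UTILISATION = 0.8    # hypothetical average load factor

annual_gwh = CAPACITY_MW * HOURS_PER_YEAR * ASSUMED_UTILISATION / 1_000
print(f"~{annual_gwh:,.0f} GWh per year at {ASSUMED_UTILISATION:.0%} utilisation")
# 500 MW * 8,760 h * 0.8 ≈ 3,500 GWh (about 3.5 TWh) per year
```

Set against the data centre totals in the energy article earlier in this section (roughly 415 TWh globally in 2024), a campus of this size is a small but non-trivial slice – which is why the question of how it will be powered matters.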

US slams brakes on AI Diffusion Rule, hardens chip export curbs

The Department of Commerce (DOC) has slammed the brakes on the sweeping “AI Diffusion Rule,” yanking it just a day before it was due to bite. Meanwhile, officials have thrown down the gauntlet with stricter measures to control semiconductor exports.

The AI Diffusion Rule, a piece of regulation cooked up under the Biden administration, was staring down a compliance deadline of May 15th. According to the folks at the DOC, letting this rule roll out would have been like throwing a spanner in the works of American innovation. DOC officials argue the rule would have saddled tech firms with “burdensome new regulatory requirements” and, perhaps more surprisingly, risked souring America’s relationships on the world stage by effectively “downgrading” dozens of countries “to second-tier status.”

The nuts and bolts of this reversal will see the Bureau of Industry and Security (BIS), part of the DOC, publishing a notice in the Federal Register to make the rescission official. While this particular rule is heading for the shredder, the official line is that a replacement isn’t off the table; one will be cooked up and served “in the future.” Jeffery Kessler, the Under Secretary of Commerce for Industry and Security, has told BIS enforcement teams to stand down on anything concerning the now-canned AI Diffusion Rule.

“The Trump Administration will pursue a bold, inclusive strategy to American AI technology with trusted foreign countries around the world, while keeping the technology out of the hands of our adversaries,” said Kessler. “At the same time, we reject the Biden Administration’s attempt to impose its own ill-conceived and counterproductive AI policies on the American people.”

What was this ‘AI Diffusion Rule’ anyway?

You might be wondering what this “AI Diffusion Rule” actually was, and why it’s causing such a stir. The rule wasn’t just a minor tweak; it was the Biden administration’s bid to get a tight grip on how advanced American tech – everything from the AI chips themselves to cloud computing access and even the crucial AI ‘model weights’ – flowed out of the US to the rest of the world. The idea, at least on paper, was to walk a tightrope: keep the US at the front of the AI pack, protect national security, and still champion American tech exports. But how did it plan to do this? The rule laid out a fairly complex playbook:

A tiered system for nations: Imagine a global league table for AI access. Countries were split into three groups. Tier 1 nations, America’s closest allies like Japan and South Korea, would have seen hardly any new restrictions. Tier 3, unsurprisingly, included countries already under arms embargoes – like China and Russia – who were already largely banned from getting US chips and would face the toughest controls imaginable.

The squeezed middle: This is where things got sticky. A large swathe of countries, including nations like Mexico, Portugal, India, and even Switzerland, found themselves in Tier 2. For them, the rule meant new limits on how many advanced AI chips they could import, especially if they were looking to build those super-powerful, large computing clusters essential for AI development.

Caps and close scrutiny: Beyond the tiers, the rule introduced actual caps on the quantity of high-performance AI chips most countries could get their hands on. If anyone wanted to bring in chips above certain levels, particularly for building massive AI data centres, they’d have faced incredibly strict security checks and reporting duties.
Controlling the ‘brains’: It wasn’t just about the hardware. The rule also aimed to regulate the storage and export of advanced AI model weights – essentially the core programming and learned knowledge of an AI system. There were strict rules about not storing these in arms-embargoed countries and only allowing their export to favoured allies, and even then, only under tight conditions.

Tech as a bargaining chip: Underneath it all, the framework was also a bit of a power play. The US aimed to use access to its coveted AI technology as a carrot, encouraging other nations to sign up to American standards and safeguards if they wanted to keep the American chips and software flowing.

The Biden administration had a clear rationale for these moves. They wanted to stop adversaries, with China being the primary concern, from getting their hands on advanced AI that could be turned against US interests or used for military purposes. It was also about cementing US leadership in AI, making sure the most potent AI systems and the infrastructure to run them stayed within the US and its closest circle of allies, all while trying to keep US tech exports competitive.

However, the AI Diffusion Rule and broader plan didn’t exactly get a standing ovation. Far from it. Major US tech players – including giants like Nvidia, Microsoft, and Oracle – voiced strong concerns. They argued that the rule, instead of protecting US interests, would stifle innovation, bog businesses down in red tape, and ultimately hurt the competitiveness of American companies on the global stage. Crucially, they also doubted it would effectively stop China from accessing advanced AI chips through other means.

And it wasn’t just industry. Many countries weren’t thrilled about being labelled “second-tier,” a status they felt was not only insulting but also risked undermining diplomatic ties. There was a real fear it could push them to look for AI technology elsewhere, potentially even from China, which was hardly the intended outcome. This widespread pushback and the concerns about hampering innovation and international relations are exactly what the current Department of Commerce is pointing to as reasons for today’s decisive action to scrap the rule.

Fresh clampdown on AI chip exports

It wasn’t just about scrapping old rules, though. The BIS also rolled out a new playbook to tighten America’s grip on AI chip exports, showing they’re serious about guarding the nation’s tech crown jewels. The latest clampdown includes:

A spotlight on Huawei Ascend chips: New guidance makes it crystal clear: using Huawei Ascend chips anywhere on
