GITEX GLOBAL in Asia: the largest tech show in the world

23-25 April 2025 | Marina Bay Sands, Singapore

GITEX ASIA 2025 will bring together 700+ tech companies, featuring 400+ startups and digital promotion agencies, and 250+ global investors & VCs from 60+ countries. The event will serve as a bridge between the Eastern and Western technology ecosystems and feature 180+ hours of expert insights from 220 global thought leaders. GITEX ASIA 2025 is set to foster cross-border collaboration, investment, and innovation, connecting global tech enterprises, unicorn founders, policymakers, SMEs, and academia to shape the future of digital transformation in Asia.

GITEX ASIA 2025 will comprise five co-located events:

- AI EVERYTHING SINGAPORE – the AI showcase
- NORTHSTAR ASIA – for startups and investors
- GITEX CYBER VALLEY ASIA – helping create a defence ecosystem for governments and businesses
- GITEX QUANTUM EXPO ASIA – Asia’s quantum frontier
- GITEX DIGI HEALTH & BIOTECH SINGAPORE – the healthcare revolution

GITEX ASIA 2025 will also host a lineup of conferences and summits exploring a range of transformative trends in technology and investment. Key themes will include AI, cloud & connectivity, cybersecurity, quantum, health tech & biotech, green tech & smart cities, startups & investors, and SMEs. Sessions will include Asia Digital AI Economy, AI Everything: AI Adoption & Commercialisation, Cybersecurity: AI-Enabled Cybersecurity & Critical Infrastructure, Digital Health, and the Supernova Pitch Competition. The event will bring together leading voices and ideas from different industries, including public services, retail, finance, education, health, and manufacturing.

Be part of the action at GITEX ASIA 2025 and witness the future of technology unfold in Singapore. For more information and updates on GITEX ASIA, visit www.gitexasia.com

Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation

Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical, and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council member, and was named one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI. In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all. So, those of us working in computing started discussing why that was the case.

There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins. Despite many efforts, we haven’t managed to change this culture.
Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline. But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not. We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy.
At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer. This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks. The challenge is learning how to maximise the benefits for humanity, society, and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education. AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model. It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
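Hall's point that such a model is "simply predicting the next word" can be made concrete with a deliberately crude sketch. The bigram counter below is a toy stand-in (real large language models use deep neural networks over subword tokens, and the tiny corpus here is invented purely for illustration), but the underlying idea of choosing a likely continuation based on training data is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor. The corpus is made up purely for illustration;
# real LLMs learn probabilities with neural networks over huge datasets.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most common continuation seen in training, or None."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once -> cat
```

No understanding or intent is involved: the prediction is a statistical echo of the training text, which is exactly the distinction Hall draws.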
With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information. I compare it to when

WAIE 2025: With four focus areas – booth reservations now open

The 6th Global Digital Economy Industry Conference 2025 & World AI Industry Exhibition (WAIE) is scheduled from July 30 to August 1 at the Shenzhen (Futian) Convention and Exhibition Centre. The event presents an opportunity for manufacturers to find industry partners and collaboration opportunities, and to secure domestic and international orders.

The Greater Bay Area has a reputation as a place where industries can transform and grow. Its vibrant economy, global outreach, and important manufacturing capabilities make it a natural centre of innovation and economic growth. As a breeding ground for scientific and technological progress, it’s the place chosen to bring innovators, manufacturers, and heads of industry together under one roof. Now in its sixth year, WAIE 2025 will focus on “Key Technologies for Intelligent Industry” and be a showcase for a range of products and services that can be demonstrated in real-world, practical scenarios.

The Digital Economy: Empowering Intelligent Industry Development

WAIE 2025 will showcase a range of solutions including sensor instruments and advanced lasers, industrial optoelectronics, robotics and intelligent factories, industrial internet, software & cloud, and photovoltaic energy storage. The aim is to connect core technologies and new products with industry and supply chains.

The Five Exhibition Areas

Industrial Chip and Sensor Instrument Expo

- Intelligent industrial sensors, MEMS sensors, 3D/machine vision, laser radar, sensing elements, light sources, lenses, visual sensing, millimetre wave radar, and instrumentation & measurement.
- Integrated circuit design: wafer manufacturing and advanced packaging technology, power device/MEMS sealing and testing, silicon wafer and IC packaging.
- Semiconductors: materials, electronic core components and parts, and associated equipment.
- Electronic terminal applications: 5G, human-computer interaction, AIoT, and other related products.
Advanced Laser and Industrial Optoelectronics Expo

- Lasers: laser accessories, modules and components, fiber lasers, solid-state lasers, semiconductor lasers, new laser technologies, sheet metal cutting equipment, laser welding equipment, laser cleaning equipment, and material processing systems.
- Optics: optical materials, components and elements, optical lenses, precision optical testing and measurement instruments, and optical processing equipment.

Robot and Intelligent Factory Expo

- Industrial robots: industrial robot systems, collaborative robots, mobile robots, humanoid robots, robot system integration, machine vision, manipulators, reducers, related machinery devices & components, and auxiliary equipment and tools.
- Intelligent factory production lines: industrial automation equipment, intelligent factory total solutions, SMT, non-standard automation production lines, dispensers, automated production lines, inspection/instrumentation/testing and measurement technology, intelligent assembly and transfer equipment, reducers, motors, human-machine interface (HMI), motion servos, enclosures and cabinets, embedded systems, industrial power supplies, connectors, electrical equipment, process and energy automation systems, and intelligent factory management systems.

Industrial Internet, Software and Cloud Expo

- Industrial internet platforms: cloud computing and big data, data centre infrastructure, data analysis and mining.
- Industrial internet network interconnectivity: communication network services, IDC, CPS, IoT, AI, and edge computing.
- Industrial internet security for network and data security.
- Industrial internet identification and data collection.
- Industrial software and integrated solutions, including industrial control software, MRO supply chain management, and embedded software.
- The digital factory, comprising R&D and product development (CAX/MES/PLM), enterprise resource planning (ERP), mechanical automation, supply chain management, and intelligent factory management systems.

Photovoltaic Energy Storage Industrial Application Expo

- Solar photovoltaic power generation: PV production equipment, batteries, modules, raw materials, photovoltaic engineering and systems, and photovoltaic application products.
- Energy storage power generation: storage batteries, energy storage technology and materials, storage systems and EPC projects, energy storage equipment and components, and storage application solutions.
- Lithium battery related products and equipment, technical solutions and applications, and energy management.

Over 20 forum events

WAIE 2025 will host more than 20 forums and summits on the applications of intelligent industrial digitalisation. There will also be award ceremonies, project matchmaking, roadshow presentations, and product launches. The event will feature more than 100 speakers and play host to over 100 experts, scholars, academicians, and entrepreneurs from diverse fields. The event aims to cover all elements of the industry supply chain, promote the development of new technologies and applications, and inject new energy and innovation into the industry.

Highlight: a combined summit and exhibition format, to boost brand exposure and customer acquisition.

Click to register for WAIE 2025.

Anthropic provides insights into the ‘AI biology’ of Claude

Anthropic has provided a more detailed look into the complex inner workings of its advanced language model, Claude. This work aims to demystify how these sophisticated AI systems process information, learn strategies, and ultimately generate human-like text.

As the researchers initially highlighted, the internal processes of these models can be remarkably opaque, with their problem-solving methods often “inscrutable to us, the model’s developers.” Gaining a deeper understanding of this “AI biology” is paramount for ensuring the reliability, safety, and trustworthiness of these increasingly powerful technologies. Anthropic’s latest findings, primarily focused on its Claude 3.5 Haiku model, offer valuable insights into several key aspects of its cognitive processes.

One of the most fascinating discoveries suggests that Claude operates with a degree of conceptual universality across different languages. Through analysis of how the model processes translated sentences, Anthropic found evidence of shared underlying features. This indicates that Claude might possess a fundamental “language of thought” that transcends specific linguistic structures, allowing it to understand and apply knowledge learned in one language when working with another.

Anthropic’s research also challenged previous assumptions about how language models approach creative tasks like poetry writing. Instead of a purely sequential, word-by-word generation process, Anthropic revealed that Claude actively plans ahead. In the context of rhyming poetry, the model anticipates future words to meet constraints like rhyme and meaning — demonstrating a level of foresight that goes beyond simple next-word prediction.

However, the research also uncovered potentially concerning behaviours. Anthropic found instances where Claude could generate plausible-sounding but ultimately incorrect reasoning, especially when grappling with complex problems or when provided with misleading hints.
The ability to “catch it in the act” of fabricating explanations underscores the importance of developing tools to monitor and understand the internal decision-making processes of AI models.

Anthropic emphasises the significance of its “build a microscope” approach to AI interpretability. This methodology allows the team to uncover insights into the inner workings of these systems that might not be apparent through simply observing their outputs. As they noted, this approach allows them to learn many things they “wouldn’t have guessed going in,” a crucial capability as AI models continue to evolve in sophistication.

The implications of this research extend beyond mere scientific curiosity. By gaining a better understanding of how AI models function, researchers can work towards building more reliable and transparent systems. Anthropic believes that this kind of interpretability research is vital for ensuring that AI aligns with human values and warrants our trust.

The investigations delved into specific areas:

- Multilingual understanding: evidence points to a shared conceptual foundation enabling Claude to process and connect information across various languages.
- Creative planning: the model demonstrates an ability to plan ahead in creative tasks, such as anticipating rhymes in poetry.
- Reasoning fidelity: Anthropic’s techniques can help distinguish between genuine logical reasoning and instances where the model might fabricate explanations.
- Mathematical processing: Claude employs a combination of approximate and precise strategies when performing mental arithmetic.
- Complex problem-solving: the model often tackles multi-step reasoning tasks by combining independent pieces of information.
- Hallucination mechanisms: the default behaviour in Claude is to decline answering if unsure, with hallucinations potentially arising from a misfiring of its “known entities” recognition system.
- Vulnerability to jailbreaks: the model’s tendency to maintain grammatical coherence can be exploited in jailbreaking attempts.

Anthropic’s research provides detailed insights into the inner mechanisms of advanced language models like Claude. This ongoing work is crucial for fostering a deeper understanding of these complex systems and building more trustworthy and dependable AI.

See also: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Infinix NOTE 50 Series: Bringing ‘Gen Beta’ mobile AI to life

Infinix is ushering in its ‘Gen Beta’ mobile AI strategy with the NOTE 50 Series, promising to bring more fun and function to daily life. Infinix’s vision aims to deliver AI innovations that are not only powerful, but also fun and accessible. At the centre of this initiative is Infinix AI∞, a system designed to seamlessly integrate into everyday life and enhance creativity, productivity, and entertainment.

Tony Zhao, General Manager of Infinix, said: “With the launch of Infinix’s ‘Gen Beta’ AI and the latest flagship NOTE 50 Series equipped with the all-new One-Tap Infinix AI∞ features, we aim to empower all Infinix users and allow them to experience AI-driven innovation first-hand. This is more than just a product upgrade—it’s a technological revolution that encourages young people to explore, experiment, and push the boundaries of AI’s role in daily life.

“We believe AI should be fun, intuitive, and deeply embedded in the way we live, work, and create.”

Infinix is committed to making AI a central part of the user experience, moving beyond mere functionality to create engaging, AI-powered entertainment. The company’s AI∞ Lab is focused on developing AI-driven gaming, immersive entertainment, and interactive social experiences.

Later this year, Infinix’s XOS mobile operating system will be upgraded to AIXOS Beta—a self-learning system that integrates AI to enhance various aspects of daily life. The AIXOS update brings a fresh visual overhaul: users can look forward to newly-designed stock app icons and a ‘TransSans’ system font which boasts unlimited font weight variation to suit individual preferences and visual needs.

Beyond aesthetics, AIXOS Beta is set to deliver a noticeably smoother user experience. The new ‘Flash App’ launch feature is based on redesigned animation curves that mimic the natural world. This aims to eliminate frustrating blank loading moments when opening applications, resulting in a more fluid and comfortable feel.
Furthermore, the enhanced ‘Sensory Scheduling 2.0’, powered by AI, will intelligently identify usage scenarios and prioritise the processing of related background tasks. Infinix claims this will lead to a tangible improvement in app startup speeds, with internal testing showing a 20% increase over two rounds of continuous startup for 25 applications.

Finally, to ensure the device continues to run optimally over time, AIXOS Beta incorporates a new Memory Defragmentation feature. This will efficiently organise the phone’s memory, helping to maintain long-term system performance and prevent slowdowns.

The NOTE 50 Series launch event itself was a notable departure from industry norms, being the brand’s first global launch event in a mobile-friendly vertical format on platforms like TikTok and Facebook. The approach demonstrates how Infinix is engaging with young audiences in modern, creative, and immersive ways.

Streamlining interaction and improving daily life

Infinix AI∞ streamlines user interaction through the One-Tap Infinix AI∞ feature, which simplifies tasks like identifying landmarks, translating menus, and answering queries with a single tap. The One-Tap Query function supports over 1,000 different scenarios and showcases AI’s potential to simplify and enhance daily life. AI-powered document summarisation, email replies, and smart writing features further boost productivity.

Infinix AI∞ also empowers creativity. AI Eraser removes unwanted objects from photos, AI Cutout enables easy image extraction, AI Wallpaper Generator creates personalised wallpapers, and AI Mosaic enhances data security by blurring private information in screenshots. AI Writing assists with text composition, and AI Note transforms sketches into AI-generated art.

Communication is also enhanced through Infinix AI∞.
The Real-Time Call Translator feature provides bidirectional translation for voice calls, AI Auto Answer filters calls, Speech Enhancement reduces noise, and Call Summary records key details. To assist with proactive health management, Infinix AI∞ integrates heart rate and blood oxygen monitoring features directly into the device (NOTE 50 Pro and Pro+ exclusive).

AI levels up the gaming experience

As the official mobile brand for numerous esports events over three years, Infinix is focused on using AI to create a high-performance gaming ecosystem. Beyond hardware specifications like a super-smooth 144Hz refresh rate display and chipsets capable of supporting gameplay at up to 120 frames per second, the NOTE 50 Series also boasts a suite of intelligent features:

- A simple long press on the power button activates One-Tap Infinix AI∞, instantly enabling an enhanced Folax for Esports Mode, Power Saving Mode, convenient floating windows, and more—all designed to ensure uninterrupted gameplay.
- ZoneTouch Master intelligently fine-tunes on-screen controls based on individual user habits, leading to greater precision.
- XBOOST AI-powered features act as an intelligent “game coach”, offering tailored guidance to enhance performance.
- The ‘AI Magic Box’ streamlines the gaming experience by accelerating in-game dialogues and automatically collecting items, allowing players to stay engrossed in the narrative while concentrating on crucial battles and exploration.
- To prevent frustrating interruptions, AI Smart Thermal Control intelligently adapts to different gaming scenarios to manage heat effectively.
- For a bit of added fun, the Magic Voice Changer even allows for customisable voice styles.

Features driven by the XBOOST AI game engine will be rolling out in the XOS Beta, so keep an eye out for the latest updates.
Infinix AI∞ to expand beyond smartphones

The NOTE 50 Series is Infinix’s first flagship smartphone line after the launch of Infinix AI∞, showcasing the One-Tap AI∞ function. However, Infinix’s AI vision extends beyond smartphones to a comprehensive AIoT ecosystem. Infinix is currently developing new ecosystem products like AI Buds and an AI Ring, all designed to work seamlessly with Infinix AI∞. The AI Ring combines fashion and technology with AI-powered health tracking, while the AI Buds – launching on Indiegogo – will offer real-time translation in 162 languages, hybrid ANC, and a touchscreen charging case.

With the NOTE 50 Series and Infinix AI∞, Infinix is ushering in the ‘Gen Beta’ era and encouraging a new generation to embrace AI.

OpenAI pulls free GPT-4o image generator after one day

OpenAI has pulled its upgraded image generation feature, powered by the advanced GPT-4o reasoning model, from the free tier of ChatGPT. The decision comes just a day after the update was launched, following an unforeseen surge in users creating images in the distinctive style of renowned Japanese animation house Studio Ghibli.

The update, which promised to deliver enhanced realism in both AI-generated images and text, was intended to showcase the capabilities of GPT-4o. The new model employs an “autoregressive approach” to image creation, building visuals from left to right and top to bottom, a method that contrasts with the simultaneous generation employed by older models. This technique is designed to improve the accuracy and lifelike quality of the imagery produced. Furthermore, the new model generates sharper and more coherent text within images, addressing a common shortcoming of previous AI models, which often produced blurry or nonsensical text. OpenAI also conducted post-launch training, guided by human feedback, to identify and rectify common errors in both text and image outputs.

However, the public response to the image generation upgrade took an unexpected turn almost immediately after its release on ChatGPT. Users embraced the ability to create images in the iconic style of Studio Ghibli, sharing their imaginative creations across social media platforms. These included reimagined scenes from classic films like “The Godfather” and “Star Wars,” as well as popular internet memes such as “distracted boyfriend” and “disaster girl,” all rendered with the aesthetic of the beloved animation studio. Even OpenAI CEO Sam Altman joined in on the fun, changing his X profile picture to a Studio Ghibli-esque rendition of himself.

Later that day, however, Altman posted on X announcing a temporary delay in the rollout of the image generator update for free ChatGPT users.
While paid subscribers to ChatGPT Plus, Pro, and Team continue to have access to the feature, Altman provided no specific timeframe for when the functionality would return to the free tier.

“images in chatgpt are wayyyy more popular than we expected (and we had pretty high expectations). rollout to our free tier is unfortunately going to be delayed for awhile.” — Sam Altman (@sama), March 26, 2025

The virality of the Studio Ghibli-style images seemingly prompted OpenAI to reconsider its rollout strategy. While the company had attempted to address ethical and legal considerations surrounding AI image generation, the sheer volume and nature of the user-generated content appear to have caught it off-guard.

The intersection of AI-generated art and intellectual property rights is a complex and often debated area. Historically, artistic style has not been protected by copyright law in the same way as specific works. Despite this legal nuance, OpenAI’s swift decision to withdraw the GPT-4o image generation feature from its free tier suggests a cautious approach. The company appears to be taking a step back to evaluate the situation and determine its next course of action in light of the unexpected popularity of Ghibli-inspired AI art.

OpenAI’s decision to roll back the deployment of its latest image generation feature underscores the ongoing uncertainty around not just copyright law, but also the ethical implications of using AI to replicate human creativity.
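The left-to-right, top-to-bottom generation described above can be caricatured in a toy sketch. The grids below are invented stand-ins for image tokens (GPT-4o's actual image architecture and training details are not public); the point is only that an autoregressive pass lets each new cell condition on what has already been generated, which tends to produce locally coherent output, whereas fully independent sampling cannot.

```python
import random

# Toy contrast between independent ("simultaneous") and autoregressive
# generation over a grid of values standing in for image tokens.
# Purely illustrative: GPT-4o's real image model is not public.
random.seed(0)

WIDTH, HEIGHT = 4, 3

def generate_simultaneous():
    """Sample every cell independently, with no knowledge of its neighbours."""
    return [[random.randint(0, 9) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def generate_autoregressive():
    """Produce cells left to right, top to bottom; each new cell conditions
    on the previously generated one, so values drift smoothly."""
    grid = [[0] * WIDTH for _ in range(HEIGHT)]
    prev = 5  # arbitrary starting value
    for y in range(HEIGHT):
        for x in range(WIDTH):
            prev = max(0, min(9, prev + random.choice([-1, 0, 1])))
            grid[y][x] = prev
    return grid

for row in generate_autoregressive():
    print(row)  # neighbouring values differ by at most 1 in raster order
```

In the autoregressive grid, adjacent values never jump by more than one step, a crude analogue of the spatial coherence the approach is meant to buy; the independently sampled grid has no such structure.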

Chinese AI innovation narrows technology divide with the US

Chinese AI innovation is reshaping the global technology landscape, challenging assumptions about Western dominance in advanced computing. Recent developments from companies like DeepSeek illustrate how quickly China has adapted to and overcome international restrictions through creative approaches to AI development.

According to Lee Kai-fu, CEO of Chinese startup 01.AI and former head of Google China, the gap between Chinese and American AI capabilities has narrowed dramatically. “Previously, I think it was a six to nine-month gap and behind in everything. And now I think that’s probably three months behind in some of the core technologies, but ahead in some specific areas,” Lee told Reuters in a recent interview.

DeepSeek has emerged as the poster child for this new wave of Chinese AI innovation. On January 20, 2025, as Donald Trump was inaugurated as US President, DeepSeek quietly launched its R1 model. The low-cost, open-source large language model reportedly rivals or surpasses OpenAI’s ChatGPT-4, yet was developed at a fraction of the cost.

Algorithmic efficiency over hardware superiority

What makes DeepSeek’s achievements particularly significant is how they’ve been accomplished despite restricted access to the latest silicon. Rather than being limited by US export controls, Chinese AI innovation has flourished by focusing instead on algorithmic efficiency and novel approaches to model architecture.

This approach was demonstrated further when DeepSeek released an upgraded V3 model on March 25, 2025. DeepSeek-V3-0324 features enhanced reasoning capabilities and improved performance in multiple benchmarks. The model showed particular strength in mathematics, scoring 59.4 on the American Invitational Mathematics Examination (AIME) compared to its predecessor’s 39.6. It also improved by 10 points on LiveCodeBench to 49.2.
Häme University lecturer Kuittinen Petri noted on social media platform X that “DeepSeek is doing all this with just [roughly] 2% [of the] money resources of OpenAI.” When he prompted the new model to create a responsive front page for an AI company, it produced a fully functional, mobile-friendly website with just 958 lines of code.

Market reactions and global impact

The financial markets have noticed the shift in the AI landscape. When DeepSeek launched its R1 model in January, America’s Nasdaq plunged 3.1%, while the S&P 500 fell 1.5% – an indication that investors recognise the potential impact of Chinese AI innovation on established Western tech companies.

The developments present opportunities and challenges for the broader global community. China’s focus on open-source, cost-effective models could democratise access to advanced AI capabilities for emerging economies. Both China and the US are making massive investments in AI infrastructure: the Trump administration has unveiled its $500 billion Stargate Project, and China projects investments of more than 10 trillion yuan (US$1.4 trillion) in technology by 2030.

Supply chain complexities and environmental considerations

The evolving AI landscape creates new geopolitical complexities. Countries like South Korea highlight the situation. As the world’s second-largest producer of semiconductors, Korea became more dependent on China in 2023 for five of the six most important raw materials needed for chipmaking. Companies like Toyota, SK Hynix, Samsung, and LG Chem remain vulnerable due to China’s supply chain dominance.

As AI development accelerates, environmental implications also loom. According to the Institute for Progress, a think tank, maintaining AI leadership will require the United States to build five gigawatt computing clusters in five years. By 2030, data centres could consume 10% of US electricity, more than double the 4% recorded in 2023.
Similarly, Greenpeace East Asia estimates that China’s digital infrastructure electricity consumption will surge by 289% by 2035.

The path forward in AI development

DeepSeek’s emergence has challenged assumptions about the effectiveness of technology restrictions. As Lee Kai-fu observed, Washington’s semiconductor sanctions were a “double-edged sword” that created short-term challenges but ultimately forced Chinese firms to innovate under constraints.

Jasper Zhang, a mathematics Olympiad gold medallist with a doctoral degree from the University of California, Berkeley, tested DeepSeek-V3-0324 with an AIME 2025 problem and reported that “it solved it smoothly.” Zhang expressed confidence that “open-source AI models will win in the end,” adding that his startup Hyperbolic now supports the new model on its cloud platform.

Industry experts are now speculating that DeepSeek may release its R2 model ahead of schedule. Li Bangzhu, founder of AIcpb.com, a website tracking the popularity of AI applications, noted that “the coding capabilities are much stronger, and the new version may pave the way for the launch of R2.” R2 is slated for an early May release, according to Reuters.

Both nations are pushing the boundaries of what’s possible, and the implications extend beyond their borders to impact global economics, security, and environmental policy.

(Image credit: engin akyurt/Unsplash)

See also: US-China tech war escalates with new AI chips export controls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google cooks up its ‘most intelligent’ AI model to date

Gemini 2.5 is being hailed by Google DeepMind as its “most intelligent AI model” to date. The first model from this latest generation is an experimental version of Gemini 2.5 Pro, which DeepMind says has achieved state-of-the-art results across a wide range of benchmarks.

According to Koray Kavukcuoglu, CTO of Google DeepMind, the Gemini 2.5 models are “thinking models”. This signifies their capability to reason through their thoughts before generating a response, leading to enhanced performance and improved accuracy.

The capacity for “reasoning” extends beyond mere classification and prediction, Kavukcuoglu explains. It encompasses the system’s ability to analyse information, deduce logical conclusions, incorporate context and nuance, and ultimately, make informed decisions.

DeepMind has been exploring methods to enhance AI’s intelligence and reasoning capabilities for some time, employing techniques such as reinforcement learning and chain-of-thought prompting. This groundwork led to the recent introduction of its first thinking model, Gemini 2.0 Flash Thinking.

“Now, with Gemini 2.5,” says Kavukcuoglu, “we’ve achieved a new level of performance by combining a significantly enhanced base model with improved post-training.” Google plans to integrate these thinking capabilities directly into all of its future models, enabling them to tackle more complex problems and support more capable, context-aware agents.

Gemini 2.5 Pro secures the LMArena leaderboard top spot

Gemini 2.5 Pro Experimental is positioned as DeepMind’s most advanced model for handling intricate tasks. At the time of writing, it has secured the top spot on the LMArena leaderboard – a key metric for assessing human preferences – by a significant margin, demonstrating a highly capable model with a high-quality style.

Gemini 2.5 is a ‘pro’ at maths, science, coding, and reasoning

Gemini 2.5 Pro has demonstrated state-of-the-art performance across various benchmarks that demand advanced reasoning.
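One benchmark-boosting trick referred to in the results below is majority voting (sometimes called self-consistency): sample several answers to the same question and keep the most common one, paying for the extra samples in compute. A minimal sketch of the idea, with hypothetical sampled answers standing in for model outputs:

```python
# Majority voting ("self-consistency"): sample several answers to the
# same question and return the most frequent one. Cost scales with the
# number of samples, which is why benchmark reports sometimes note
# whether such test-time techniques were used.
from collections import Counter

def majority_vote(answers):
    # Counter.most_common(1) returns [(answer, count)] for the mode.
    return Counter(answers).most_common(1)[0][0]

# Hypothetical sampled answers to one maths question; "42" wins 3-2.
samples = ["42", "41", "42", "42", "43"]
winner = majority_vote(samples)
```

Running the model once per task, as Gemini 2.5 Pro’s quoted scores do, avoids this multiplier on inference cost.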
Notably, it leads in maths and science benchmarks – such as GPQA and AIME 2025 – without relying on test-time techniques that increase costs, like majority voting. It also achieved a state-of-the-art score of 18.8% on Humanity’s Last Exam, a dataset designed by subject matter experts to evaluate the human frontier of knowledge and reasoning.

DeepMind has placed significant emphasis on coding performance, and Gemini 2.5 represents a substantial leap forward compared to its predecessor, 2.0, with further improvements in the pipeline. 2.5 Pro excels at creating visually compelling web applications and agentic code applications, as well as code transformation and editing. On SWE-Bench Verified, the industry standard for agentic code evaluations, Gemini 2.5 Pro achieved a score of 63.8% using a custom agent setup. The model’s reasoning capabilities also enable it to create a video game by generating executable code from a single-line prompt.

Building on its predecessors’ strengths

Gemini 2.5 builds upon the core strengths of earlier Gemini models, including native multimodality and a long context window. 2.5 Pro launches with a one-million-token context window, with plans to expand this to two million tokens soon. This enables the model to comprehend vast datasets and handle complex problems from diverse information sources, spanning text, audio, images, video, and even entire code repositories.

Developers and enterprises can now begin experimenting with Gemini 2.5 Pro in Google AI Studio. Gemini Advanced users can also access it via the model dropdown on desktop and mobile platforms. The model will be rolled out on Vertex AI in the coming weeks.

Google DeepMind encourages users to provide feedback, which will be used to further enhance Gemini’s capabilities.

(Photo by Anshita Nair)

See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first

eDiscovery given a boost by AI for the pharmaceutical industry

In an increasing number of industries, eDiscovery of regulation and compliance documents can make trading (across state borders in the US, for example) less complex. In an industry like pharmaceuticals, with its often complex supply chains, companies have to be aware of the mass of changing rules and regulations emanating from different legislatures at local and federal levels. It’s no surprise, therefore, that regulated supply chain compliance is an area where AI can be hugely beneficial.

Given that AI excels at reading and parsing documentation and images, service providers like Lighthouse AI use the technology in its different forms to comb through the existing and new documentation that governs the industry. The company’s latest suite, Lighthouse AI for Review, uses variations on machine learning – predictive and generative AI, image recognition and OCR, plus linguistic modelling – to handle use cases in large-volume, time-sensitive settings.

Predictive AI is used for the classification of documents, and generative AI helps with the review process for better, more defensible, downstream results. The company claims that the linguistic modelling element of the suite refines the platform’s accuracy to levels normally “beyond AI’s capabilities.”

eDiscovery – the broad term

Lighthouse AI is currently six years old and has analysed billions of documents since 2019, but predictive AI remains important to the software, despite – it might be said – generative AI grabbing most of the headlines in the last 18 months. Fernando Delgado, Director of AI and Analytics at Lighthouse, said: “While much attention has been rightly paid to the impact of GenAI recently, the power and relevancy of predictive AI cannot be overlooked.
They do different things, and there is often real value in combining them to handle different elements in the same workflow.”

Given that the blanket term ‘the pharmaceutical industry’ includes concerns as disparate as medical technology, drug research, and production, right through to dispensing stores, the compliance requirements for an individual company in the sector can be wildly varied. “Rather than a one-size-fits-all approach, we’ve been able to shape the technology to fit our unique needs – turning our ideas into real, impactful solutions,” says Christian Mahoney, Counsel at Cleary Gottlieb Steen & Hamilton.

Lighthouse AI for Review covers use cases including AI for Responsive Review, AI for Privilege Review, AI for Privilege Analysis, and AI for PII/PHI/PCI Identification. Lighthouse claims that users of the AI for Responsive Review feature see up to a 40% reduction in the volume of classification and summary documents, with less training required by the LLM before it begins to create ROI. AI for Privilege Review is also “60% more accurate than keyword-based models,” Lighthouse says.

AI’s acuity with visual data is handled by AI for Image Analysis, which uses GenAI to analyse images and, for example, produce text descriptions of media, presenting results in the same interface users interact with for other tasks. Lighthouse’s AI for PII/PHI/PCI Identification automates the mapping of relationships between entities and can reduce the need for manual reviews.

“The new offerings are highly differentiated and designed to provide the most impact for the volume, velocity, and complexity of eDiscovery,” said Lighthouse CEO, Ron Markezich.

(Image source: “Basel – Roche Building 1” by corno.fulgur75 is licensed under CC BY 2.0.)

See also: Hugging Face calls for open-source focus in the AI Action Plan
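For a concrete sense of the “keyword-based models” that Lighthouse benchmarks its privilege review against, here is a minimal keyword-based privilege screen; the watchlist terms and documents are illustrative, not Lighthouse’s.

```python
# A minimal keyword-based privilege screen of the kind predictive
# models are compared against: flag a document as potentially
# privileged if it contains any term from a watchlist. The terms and
# documents below are illustrative only.

PRIVILEGE_TERMS = {"attorney-client", "legal advice", "counsel", "privileged"}

def keyword_flag(document: str) -> bool:
    # Case-insensitive substring match against every watchlist term.
    text = document.lower()
    return any(term in text for term in PRIVILEGE_TERMS)

docs = [
    "Per our counsel's legal advice, hold the shipment.",  # flagged
    "Q3 production schedule for the Basel facility.",      # not flagged
]
flags = [keyword_flag(d) for d in docs]
```

Such screens are cheap but brittle: they over-flag any stray mention of a watchlist word and miss privileged material phrased without one, which is the gap that trained classifiers aim to close.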

ARC Prize launches its toughest AI benchmark yet: ARC-AGI-2

ARC Prize has launched the hardcore ARC-AGI-2 benchmark, accompanied by the announcement of its 2025 competition with $1 million in prizes. As AI progresses from performing narrow tasks to demonstrating general, adaptive intelligence, the ARC-AGI-2 challenges aim to uncover capability gaps and actively guide innovation.

“Good AGI benchmarks act as useful progress indicators. Better AGI benchmarks clearly discern capabilities. The best AGI benchmarks do all this and actively inspire research and guide innovation,” the ARC Prize team states. ARC-AGI-2 is setting out to achieve the “best” category.

Beyond memorisation

Since its inception in 2019, ARC Prize has served as a “North Star” for researchers striving toward AGI by creating enduring benchmarks. Benchmarks like ARC-AGI-1 leaned into measuring fluid intelligence (i.e., the ability to adapt learning to new, unseen tasks), representing a clear departure from datasets that reward memorisation alone.

ARC Prize’s mission is also forward-thinking, aiming to accelerate timelines for scientific breakthroughs. Its benchmarks are designed not just to measure progress but to inspire new ideas.

Researchers observed a critical shift with the debut of OpenAI’s o3 in late 2024, evaluated using ARC-AGI-1. Combining deep learning-based large language models (LLMs) with reasoning synthesis engines, o3 marked a breakthrough where AI transitioned beyond rote memorisation. Yet, despite progress, systems like o3 remain inefficient and require significant human oversight during training. To challenge these systems on true adaptability and efficiency, ARC Prize introduced ARC-AGI-2.

ARC-AGI-2: Closing the human-machine gap

The ARC-AGI-2 benchmark is tougher for AI yet retains its accessibility for humans. While frontier AI reasoning systems continue to score in single-digit percentages on ARC-AGI-2, humans can solve every task in under two attempts. So, what sets ARC-AGI apart?
Its design philosophy chooses tasks that are “relatively easy for humans, yet hard, or impossible, for AI.” The benchmark includes datasets with varying visibility and targets the following weaknesses:

- Symbolic interpretation: AI struggles to assign semantic significance to symbols, instead focusing on shallow comparisons like symmetry checks.
- Compositional reasoning: AI falters when it needs to apply multiple interacting rules simultaneously.
- Contextual rule application: Systems fail to apply rules differently based on complex contexts, often fixating on surface-level patterns.

Most existing benchmarks focus on superhuman capabilities, testing advanced, specialised skills at scales unattainable for most individuals. ARC-AGI flips the script and highlights what AI can’t yet do – specifically, the adaptability that defines human intelligence. When the gap between tasks that are easy for humans but difficult for AI eventually reaches zero, AGI can be declared achieved.

However, achieving AGI isn’t limited to the ability to solve tasks; efficiency – the cost and resources required to find solutions – is emerging as a crucial defining factor.

The role of efficiency

Measuring performance by cost per task is essential to gauge intelligence as not just problem-solving capability but the ability to solve problems efficiently. Real-world examples already show efficiency gaps between humans and frontier AI systems:

- Human panel: passes ARC-AGI-2 tasks with 100% accuracy at $17/task.
- OpenAI o3: early estimates suggest a 4% success rate at an eye-watering $200 per task.

These metrics underline disparities in adaptability and resource consumption between humans and AI. ARC Prize has committed to reporting on efficiency alongside scores across future leaderboards.
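The per-attempt figures above actually understate the gap: at a 4% success rate, the expected cost per solved task is the attempt cost divided by the success rate. A quick check of the quoted numbers:

```python
# Expected cost per *solved* task = cost per attempt / success rate.
# Figures from the text: humans pass at 100% for ~$17/task; early o3
# estimates suggest ~4% at ~$200/task.

def cost_per_solved(cost_per_attempt: float, success_rate: float) -> float:
    return cost_per_attempt / success_rate

human = cost_per_solved(17.0, 1.00)   # $17 per solved task
o3 = cost_per_solved(200.0, 0.04)     # ~$5,000 per solved task
```

By this measure the gap is roughly $17 versus $5,000 per solved task – around 300× – which is why ARC Prize reports efficiency alongside raw scores.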
The focus on efficiency prevents brute-force solutions from being considered “true intelligence.” Intelligence, according to ARC Prize, encompasses finding solutions with minimal resources – a quality distinctly human but still elusive for AI.

ARC Prize 2025

ARC Prize 2025 launches on Kaggle this week, promising $1 million in total prizes and showcasing a live leaderboard for open-source breakthroughs. The contest aims to drive progress toward systems that can efficiently tackle ARC-AGI-2 challenges. The prize categories, which have increased from 2024 totals, include:

- Grand prize: $700,000 for reaching 85% success within Kaggle efficiency limits.
- Top score prize: $75,000 for the highest-scoring submission.
- Paper prize: $50,000 for transformative ideas contributing to solving ARC-AGI tasks.
- Additional prizes: $175,000, with details to be announced during the competition.

These incentives ensure fair and meaningful progress while fostering collaboration among researchers, labs, and independent teams. Last year, ARC Prize 2024 drew 1,500 competing teams and produced 40 influential papers. This year’s increased stakes aim to nurture even greater success.

ARC Prize believes progress hinges on novel ideas rather than merely scaling existing systems. The next breakthrough in efficient general systems might not originate from today’s tech giants but from bold, creative researchers embracing complexity and curious experimentation.

(Image credit: ARC Prize)

See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first
