Kay Firth-Butterfield, formerly WEF: The future of AI, the metaverse and digital transformation

Kay Firth-Butterfield is a globally recognised leader in ethical artificial intelligence and a distinguished AI ethics speaker. As the former Head of AI and Machine Learning at the World Economic Forum (WEF) and one of the foremost voices in AI governance, she has spent her career advocating for technology that enhances, rather than harms, society. We spoke to Kay to discuss the promise and pitfalls of generative AI, the future of the Metaverse, and how organisations can prepare for a decade of unprecedented digital transformation.

Generative AI has captured global attention, but there’s still a great deal of misunderstanding around what it actually is. Could you walk us through what defines generative AI, how it works, and why it’s considered such a transformative evolution of artificial intelligence?

It’s very exciting because it represents the next iteration of artificial intelligence. What generative AI allows you to do is ask questions of the world’s data simply by typing a prompt. If we think back to science fiction, that’s essentially what we’ve always dreamed of — just being able to ask a computer a question and have it draw on all its knowledge to provide an answer.

How does it do that? Well, it predicts which word is likely to come next in a sequence. It does this by accessing enormous volumes of data. We refer to these as large language models. Essentially, the machine ‘reads’ — or at least accesses — all the data available on the open web. In some cases, and this is an area of legal contention, it also accesses IP-protected and copyrighted material. We can expect a great deal of legal debate in this space. Once the model has ingested all this data, it begins to predict what word naturally follows another, enabling it to construct highly complex and nuanced responses. Anyone who has experimented with it knows that it can return some surprisingly eloquent and insightful content simply through this predictive capability.
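Kay’s description of next-word prediction can be illustrated in miniature with a simple bigram model that counts, for each word in a corpus, which word most often follows it. Real large language models use neural networks trained on vastly more data, but the predictive principle is the same; the toy corpus below is an invented example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which: a miniature version of the
    'predict the next word' idea behind large language models."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Scaling that counting idea up to billions of parameters and trillions of words is, in essence, what produces the eloquent responses Kay describes.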
Of course, sometimes it gets things wrong. In the AI community, we call this ‘hallucination’ — essentially, the system fabricates information. That’s a serious issue because in order to rely on AI-generated outputs, we need to reach a point where we can trust the responses. The problem is, once a hallucination enters the data pool, it can be repeated and reinforced by the model.

While much has been said about generative AI’s technical potential, what do you see as the most meaningful societal and business benefits it offers? And what challenges must we address to ensure these advantages are equitably realised?

AI is now accessible to everyone, and that’s incredibly powerful. It’s a hugely democratising tool. It means that small and medium-sized enterprises, which previously couldn’t afford to leverage AI, now can. However, we also need to be aware that most of the world’s data is created in the United States first, followed by Europe and China. There are clear challenges regarding the datasets these large language models are trained on. They’re not truly using ‘global’ data. They’re working with a limited subset. That has led to discussions around digital colonisation, where content generated from American and European data is projected onto the rest of the world, with an implicit expectation that others will adopt and use it. Different cultures, of course, require different responses. So, while there are countless benefits to generative AI, there are also significant challenges that we must address if we want to ensure fair and inclusive outcomes.

The Metaverse has seen both hype and hesitation in recent years. From your perspective, what is the current trajectory of the Metaverse, and how do you see its role evolving within business environments over the next five years?

It’s interesting. We went through a phase of huge excitement around the Metaverse, where everyone wanted to be involved.
But now we’ve entered more of a Metaverse winter, or perhaps autumn, as it’s become clear just how difficult it is to create compelling content for these immersive spaces. We’re seeing strong use cases in industrial applications, but we’re still far from achieving that Ready Player One vision — where we live, shop, buy property, and fully interact in 3D virtual environments. That’s largely because the level of compute power and creative resources needed to build truly immersive experiences is enormous.

In five years’ time, I think we’ll start to see the Metaverse delivering on more of its promises for business. Customers may enjoy exceptional shopping experiences — entering virtual stores rather than simply browsing online, where they can ‘feel’ fabrics virtually and make informed decisions in real time. We may also see remote working evolve, where employees collaborate inside the Metaverse as if they were in the same room. One study found that younger workers often lack adequate supervision when working remotely. In a Metaverse setting, you could offer genuine, interactive supervision and mentorship. It may also help foster the colleague relationships that are often missed in remote work settings. Ultimately, the Metaverse removes physical constraints and offers new ways of working and interacting — but we’ll need balance. Many people may not want to spend all their time in fully immersive environments.

Looking ahead, which emerging technologies and AI-driven trends do you anticipate will have the most profound global impact over the next decade? And how should we be preparing for their implications, both economically and ethically?

That’s a great question. It’s a bit like pulling out a crystal ball. But without doubt, generative AI is one of the most significant shifts we’re seeing today. As the technology becomes more refined, it will increasingly power new AI applications through natural language interactions.
Natural Language Processing (NLP) is the AI term for the machine’s ability to understand and interpret human language. In the near future, only elite developers will need to code manually. The rest of us will interact with machines by typing or speaking requests. These systems will not only provide answers, but also write code on our behalf. It’s incredibly powerful, transformative technology. But there are

AI financial planning should be limited to short-term decisions

Research conducted by Vlerick Business School has found that, in AI financial planning, the technology consistently outperforms humans when allocating budgets with strategic guidelines in place. Businesses that use AI in their budgeting processes see substantial improvements in the accuracy and efficiency of budgeting plans compared to human decision-making.

The study set out to examine AI’s role in corporate budgeting: how well the technology performs when making financial decisions, whether those decisions align with a company’s long-term strategy, and how they compare with human management.

The researchers, Kristof Stouthuysen, Professor of Management Accounting and Digital Finance at Vlerick Business School, and PhD researcher Emma Willems, studied tactical and strategic budgeting approaches. Tactical budgeting covers quick, responsive, short-term, data-driven financial decisions aimed at improving immediate performance, such as adjusting spending in response to market trends. Strategic budgeting involves a more comprehensive approach focused on future planning, aligning resources with a business’s vision.

According to the research, AI is superior at tactical budgeting processes such as cost management and resource allocation. However, human insight remains important for accurate, strategic financial planning over the long term.

In a controlled experiment, the researchers ran a management simulation in which experienced managers were asked to allocate budgets for a hypothetical automotive-parts company. Stouthuysen and Willems then compared these human decisions with those produced by an AI algorithm using the same financial data. The results showed that AI was superior at optimising budgets when a company’s strategic financial planning was clearly defined.
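The tactical/strategic split can be made concrete with a simple allocator that shifts a flexible portion of the budget toward departments with the best recent returns, while strategically set minimums act as human guardrails. This is an illustrative sketch only, not the algorithm used in the Vlerick study; the department names and figures are invented.

```python
def tactical_reallocation(roi, floors, total_budget):
    """Shift next-period budget toward higher-ROI departments, while
    respecting strategically set minimum allocations (the guardrails)."""
    # Guaranteed minimums come off the top: the strategic decision.
    reserved = sum(floors.values())
    flexible = total_budget - reserved
    if flexible < 0:
        raise ValueError("strategic floors exceed the total budget")
    # The flexible pot is weighted by observed short-term ROI: the
    # tactical, data-driven decision the study says AI handles well.
    total_roi = sum(roi.values()) or 1.0
    return {d: floors[d] + flexible * roi[d] / total_roi for d in floors}

plan = tactical_reallocation(
    roi={"marketing": 0.12, "r_and_d": 0.30, "operations": 0.08},
    floors={"marketing": 100_000, "r_and_d": 150_000, "operations": 120_000},
    total_budget=500_000,
)
# Budget shifts toward r_and_d, the highest-ROI department, but every
# department still receives at least its strategic floor.
```

The division of labour mirrors the study's finding: humans set the floors and the total (strategy); the data-driven rule handles the period-to-period reallocation (tactics).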
However, AI struggled to make budgeting decisions when key performance indicators (KPIs) did not align with the company’s financial goals. Stouthuysen and Willems’ work emphasised the importance of collaboration between humans and AI: “As AI continues to evolve, companies that use its strengths in tactical budgeting while maintaining human oversight in strategic planning will gain a competitive edge. The key is knowing where AI should lead and where human intuition remains indispensable.”

According to the study, AI can in principle take over tactical budgeting from humans, delivering more precise and efficient outcomes. Stouthuysen and Willems believe companies should define their strategic priorities clearly and deploy AI for tactical budgeting decisions to maximise financial performance and achieve sustainable growth.

The findings challenge the widespread misconception that AI can completely substitute for humans in budgeting. Instead, the research emphasises the importance of a balanced approach that uses both AI and humans, assigning tasks to machine or human according to their proven abilities.

(Image source: “Payday” by 401(K) 2013 is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tony Blair Institute AI copyright report sparks backlash

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI. According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead. The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.” Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and auditive content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it. “AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states. However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.” The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes. The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent. The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. 
For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.” However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.” The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.” According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing. The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI. The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions. In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.” The report includes recommendations and examines novel forms of art that will emerge from AI. 
It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces. Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy, implementation problems with opt-outs, making opt-outs useful and accessible, and tackling the diffusion problem. AI summaries and the problems they present regarding identity are also addressed, along with defensive tools as a partial solution and solving licensing problems. The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky, including:

- The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
- The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
- The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
- The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
- The report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
- The report omits data showing how generative AI replaces demand for human creative labour.
- The report’s proposed solutions are also criticised, specifically the suggestion to set up an academic centre, which Newton-Rex notes “no one has asked for”, and the proposal to tax every household in the UK to fund it, which he argues places the financial burden on consumers rather than the AI companies themselves, while the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author

Study claims OpenAI trains AI models on copyrighted data

A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates the GPT-4o model from OpenAI demonstrates a “strong recognition” of paywalled and copyrighted data from O’Reilly Media books.

The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.

The study used a legally-obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method to determine if the models could differentiate between human-authored O’Reilly texts and paraphrased LLM versions.

Key findings from the report include:

- GPT-4o shows “strong recognition” of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%).
- GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively).
- GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores).
- GPT-4o Mini, a smaller model, showed no knowledge of public or non-public O’Reilly Media content when tested (AUROC approximately 50%).

The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there.
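The AUROC figures above measure how often a membership-inference test ranks a genuinely in-training (“member”) text above an unseen one: 50% is coin-flip guessing, 100% is perfect separation. A minimal, self-contained illustration of the metric follows; this is not the DE-COP method itself (which derives its scores from paraphrase-recognition probabilities), and the scores are invented.

```python
def auroc(member_scores, nonmember_scores):
    """Probability that a randomly chosen member text scores higher
    than a randomly chosen non-member text (ties count as half)."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Members consistently outrank non-members: perfect separation.
print(auroc([0.9, 0.8, 0.7], [0.6, 0.5]))  # 1.0
# Indistinguishable scores: chance level, like GPT-3.5 Turbo above.
print(auroc([0.5], [0.5]))                  # 0.5
```

On this scale, GPT-4o's 82% on non-public O'Reilly content means it ranked paywalled passages above unseen ones far more often than chance would allow.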
They also acknowledge that newer LLMs have an improved ability to distinguish between human-authored and machine-generated language, which does not reduce the method’s ability to classify data. The study highlights the potential for “temporal bias” in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

The report notes that while the evidence is specific to OpenAI and O’Reilly Media books, it likely reflects a systemic issue around the use of copyrighted data. It argues that uncompensated training data usage could lead to a decline in the internet’s content quality and diversity, as revenue streams for professional content creation diminish.

The AI Disclosures Project emphasises the need for stronger accountability in AI companies’ model pre-training processes. They suggest that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration. The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders know when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

The report concludes by stating that, using 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.
(Image by Sergei Tokmakov)

See also: Anthropic provides insights into the ‘AI biology’ of Claude

A step towards smarter, web-native AI agents

Amazon has introduced Nova Act, an advanced AI model engineered for smarter agents that can execute tasks within web browsers. While large language models popularised the concept of “agents” as tools that answer queries or retrieve information via methods such as Retrieval-Augmented Generation (RAG), Amazon envisions something more robust. The company defines agents not just as responders but as entities capable of performing tangible, multi-step tasks in diverse digital and physical environments.

“Our dream is for agents to perform wide-ranging, complex, multi-step tasks like organising a wedding or handling complex IT tasks to increase business productivity,” said Amazon.

Current market offerings often fall short, with many agents requiring continuous human supervision and their functionality dependent on comprehensive API integration—something not feasible for all tasks. Nova Act is Amazon’s answer to these limitations.

Alongside the model, Amazon is releasing a research preview of the Amazon Nova Act SDK. Using the SDK, developers can create agents capable of automating web tasks like submitting out-of-office notifications, scheduling calendar holds, or enabling automatic email replies. The SDK aims to break down complex workflows into dependable “atomic commands” such as searching, checking out, or interacting with specific interface elements like dropdowns or popups. Detailed instructions can be added to refine these commands, allowing developers to, for instance, instruct an agent to bypass an insurance upsell during checkout. To further enhance accuracy, the SDK supports browser manipulation via Playwright, API calls, Python integrations, and parallel threading to overcome web page load delays.

Nova Act: Exceptional performance on benchmarks

Unlike other generative models that showcase middling accuracy on complex tasks, Nova Act prioritises reliability.
Amazon highlights its model’s impressive scores of over 90% on internal evaluations for specific capabilities that typically challenge competitors. Nova Act achieved a near-perfect 0.939 on the ScreenSpot Web Text benchmark, which measures natural language instructions for text-based interactions, such as adjusting font sizes. Competing models such as Claude 3.7 Sonnet (0.900) and OpenAI’s CUA (0.883) trail behind by significant margins. Similarly, Nova Act scored 0.879 in the ScreenSpot Web Icon benchmark, which tests interactions with visual elements like rating stars or icons. While the GroundUI Web test, designed to assess an AI’s proficiency in navigating various user interface elements, showed Nova Act slightly trailing competitors, Amazon sees this as an area ripe for improvement as the model evolves.

Amazon stresses its focus on delivering practical reliability. Once an agent built using Nova Act functions as expected, developers can deploy it headlessly, integrate it as an API, or even schedule it to run tasks asynchronously. In one demonstrated use case, an agent automatically orders a salad for delivery every Tuesday evening without requiring ongoing user intervention.

Amazon sets out its vision for scalable and smart AI agents

One of Nova Act’s standout features is its ability to transfer its user interface understanding to new environments with minimal additional training. Amazon shared an instance where Nova Act performed admirably in browser-based games, even though its training had not included video game experiences. This adaptability positions Nova Act as a versatile agent for diverse applications.

This capability is already being leveraged in Amazon’s own ecosystem. Within Alexa+, Nova Act enables self-directed web navigation to complete tasks for users, even when API access is not comprehensive enough. This represents a step towards smarter AI assistants that can function independently, harnessing their skills in more dynamic ways.
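The “atomic commands” idea described earlier, breaking a workflow into small, dependable, individually checkable steps with optional refinements, can be sketched as follows. This is an illustrative stand-in, not the real Nova Act SDK: the class and method names (`BrowserAgent`, `act`) are invented for the example.

```python
# Illustrative only: these names are hypothetical stand-ins,
# not the actual Amazon Nova Act SDK API.
class BrowserAgent:
    def __init__(self):
        self.log = []

    def act(self, command, **details):
        # Each call is one "atomic command": a small step the agent
        # can retry or verify independently of the whole workflow.
        self.log.append((command, details))
        return f"done: {command}"

def schedule_calendar_hold(agent, title, when):
    """Decompose one workflow into atomic steps. Refinements (like
    skipping an upsell or popup) attach as detail on a single step."""
    agent.act("open", url="https://calendar.example.com")
    agent.act("click", element="New event")
    agent.act("type", field="Title", text=title)
    agent.act("type", field="Time", text=when)
    agent.act("click", element="Save", refine="dismiss any reminder popup")
    return agent.log
```

The design point is that failures surface at the granularity of one command rather than an opaque end-to-end run, which is what makes headless or scheduled deployment of such agents tractable.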
Amazon is clear that Nova Act represents the first stage in a broader mission to craft intelligent, reliable AI agents capable of handling increasingly complex, multi-step tasks. Expanding beyond simple instructions, Amazon’s focus is on training agents through reinforcement learning across varied, real-world scenarios rather than overly simplistic demonstrations. This foundational model serves as a checkpoint in a long-term training curriculum for Nova models, indicating the company’s ambition to reshape the AI agent landscape.

“The most valuable use cases for agents have yet to be built,” Amazon noted. “The best developers and designers will discover them. This research preview of our Nova Act SDK enables us to iterate alongside these builders through rapid prototyping and iterative feedback.”

Nova Act is a step towards making AI agents truly useful for complex, digital tasks. From rethinking benchmarks to emphasising reliability, its design philosophy is centred around empowering developers to move beyond what’s possible with current-generation tools.

How debugging and data lineage techniques can protect Gen AI investments

As the adoption of AI accelerates, organisations may overlook the importance of securing their Gen AI products. Companies must validate and secure the underlying large language models (LLMs) to prevent malicious actors from exploiting these technologies. Furthermore, AI itself should be able to recognise when it is being used for criminal purposes.

Enhanced observability and monitoring of model behaviours, along with a focus on data lineage, can help identify when LLMs have been compromised. These techniques are crucial in strengthening the security of an organisation’s Gen AI products, and new debugging techniques can ensure those products perform optimally. Given the rapid pace of adoption, organisations should take a cautious approach when developing or implementing LLMs to safeguard their investments in AI.

Establishing guardrails

The implementation of new Gen AI products significantly increases the volume of data flowing through businesses today. Organisations must be aware of the type of data they provide to the LLMs that power their AI products and, importantly, how this data will be interpreted and communicated back to customers. Due to their non-deterministic nature, LLM applications can unpredictably “hallucinate”, generating inaccurate, irrelevant, or potentially harmful responses. To mitigate this risk, organisations should establish guardrails to prevent LLMs from absorbing and relaying illegal or dangerous information.

Monitoring for malicious intent

It’s also crucial for AI systems to recognise when they are being exploited for malicious purposes. User-facing LLMs, such as chatbots, are particularly vulnerable to attacks like jailbreaking, where an attacker issues a malicious prompt that tricks the LLM into bypassing the moderation guardrails set by its application team. This poses a significant risk of exposing sensitive information.
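A guardrail of the kind described can be as simple as a pre-screening function placed in front of the LLM. The sketch below is a toy illustration: production systems typically use trained moderation classifiers rather than keyword and regex lists, and the topics and patterns shown are invented examples.

```python
import re

# Invented examples; a real deployment would maintain these via policy.
BLOCKED_TOPICS = ("make a weapon", "steal credentials")
JAILBREAK_PATTERNS = (
    r"ignore (all|your) (previous|prior) instructions",
    r"pretend (you are|to be) .* without (rules|restrictions)",
)

def guardrail_check(prompt):
    """Return (allowed, reason) before the prompt ever reaches the LLM.
    A toy filter: keyword match for banned topics, regex heuristics for
    common jailbreak phrasings."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"blocked topic: {topic}"
    for pattern in JAILBREAK_PATTERNS:
        if re.search(pattern, lowered):
            return False, "possible jailbreak attempt"
    return True, "ok"
```

Refused prompts, and especially the reasons for refusal, become a valuable signal for the observability tooling discussed next: a spike in jailbreak refusals is itself an indicator of an active attack.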
Monitoring model behaviours for potential security vulnerabilities or malicious attacks is essential. LLM observability plays a critical role in enhancing the security of LLM applications. By tracking access patterns, input data, and model outputs, observability tools can detect anomalies that may indicate data leaks or adversarial attacks. This allows data scientists and security teams to proactively identify and mitigate security threats, protecting sensitive data and ensuring the integrity of LLM applications.

Validation through data lineage

The nature of threats to an organisation’s security, and that of its data, continues to evolve. As a result, LLMs are at risk of being hacked and fed false data, which can distort their responses. While it’s necessary to implement measures to prevent LLMs from being breached, it is equally important to closely monitor data sources to ensure they remain uncorrupted. In this context, data lineage plays a vital role in tracking the origins and movement of data throughout its lifecycle. By questioning the security and authenticity of the data, as well as the validity of the data libraries and dependencies that support the LLM, teams can critically assess LLM data and accurately determine its source. Data lineage processes and investigations thereby enable teams to validate all new LLM data before integrating it into their Gen AI products.

A clustering approach to debugging

Ensuring the security of AI products is a key consideration, but organisations must also maintain ongoing performance to maximise their return on investment. DevOps teams can use techniques such as clustering, which groups events to identify trends, aiding in the debugging of AI products and services. For instance, when analysing a chatbot’s performance to pinpoint inaccurate responses, clustering can be used to group the most commonly asked questions.
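That grouping step can be sketched with a simple greedy clustering based on word overlap. A real pipeline would more likely cluster sentence embeddings (for example with k-means), but the effect of grouping near-duplicate questions is the same; the questions below are invented examples.

```python
def tokens(text):
    return set(text.lower().split())

def cluster_questions(questions, threshold=0.5):
    """Greedily group questions whose word overlap (Jaccard similarity)
    with a cluster's first member exceeds the threshold. A stand-in for
    embedding-based clustering in a real debugging pipeline."""
    clusters = []
    for q in questions:
        for cluster in clusters:
            rep, t = tokens(cluster[0]), tokens(q)
            if len(rep & t) / len(rep | t) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])  # no similar cluster found; start one
    return clusters

clusters = cluster_questions([
    "how do i reset my password",
    "how can i reset my password",
    "what are your opening hours",
])
# The two password questions land in one cluster; if that cluster has a
# high wrong-answer rate, the team knows exactly which intent to debug.
```

Counting wrong answers per cluster, rather than per individual question, is what turns scattered complaints into a single actionable trend.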
This approach helps determine which questions are receiving incorrect answers. By identifying trends among sets of questions that are otherwise different and unrelated, teams can better understand the issue at hand. As a streamlined and centralised method of collecting and analysing clusters of data, the technique saves time and resources, enabling DevOps teams to drill down to the root of a problem and address it effectively. The resulting ability to fix bugs both in the lab and in real-world scenarios improves the overall performance of a company’s AI products.

Since the release of LLMs like GPT, LaMDA, LLaMA, and several others, Gen AI has quickly become more integral to aspects of business, finance, security, and research than ever before. In their rush to implement the latest Gen AI products, however, organisations must remain mindful of security and performance. A compromised or bug-ridden product could be, at best, an expensive liability and, at worst, illegal and potentially dangerous. Data lineage, observability, and debugging are vital to the successful performance of any Gen AI investment.

GITEX GLOBAL in Asia: the largest tech show in the world

23-25 April 2025 | Marina Bay Sands, Singapore

GITEX ASIA 2025 will bring together 700+ tech companies, featuring 400+ startups and digital promotion agencies, and 250+ global investors & VCs from 60+ countries. The event will serve as a bridge between the Eastern and Western technology ecosystems and feature 180+ hours of expert insights from 220 global thought leaders. GITEX ASIA 2025 is set to foster cross-border collaboration, investment, and innovation, connecting global tech enterprises, unicorn founders, policymakers, SMEs, and academia to shape the future of digital transformation in Asia.

GITEX ASIA 2025 will comprise five co-located events:

- AI EVERYTHING SINGAPORE – the AI showcase
- NORTHSTAR ASIA – for startups and investors
- GITEX CYBER VALLEY ASIA – helping create a defence ecosystem for governments and businesses
- GITEX QUANTUM EXPO ASIA – Asia’s quantum frontier
- GITEX DIGI HEALTH & BIOTECH SINGAPORE – the healthcare revolution

GITEX ASIA 2025 will host a lineup of conferences and summits, exploring a range of transformative trends in technology and investment. Key themes will include AI, cloud & connectivity, cybersecurity, quantum, health tech & biotech, green tech & smart cities, startups & investors, and SMEs. Sessions will include Asia Digital AI Economy, AI Everything: AI Adoption & Commercialisation, Cybersecurity: AI-Enabled Cybersecurity & Critical Infrastructure, Digital Health, and the Supernova Pitch Competition.

The event will bring together leading voices and ideas from different industries, including public services, retail, finance, education, health, and manufacturing. Be part of the action at GITEX ASIA 2025 and witness the future of technology unfold in Singapore.

For more information and updates on GITEX ASIA, visit www.gitexasia.com

Social media links: LinkedIn | X | Facebook | Instagram | YouTube

Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation

Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4. A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI. In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all. So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins. Despite many efforts, we haven’t managed to change this culture.
Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies. AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline. But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone. AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not. We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy.
At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer. This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks. The challenge is learning how to maximise the benefits for humanity, society, and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education. AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies. Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model. It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
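The next-word prediction Wendy describes can be made concrete with a toy bigram model. This is a deliberately simplified sketch with an invented corpus: real large language models learn these statistics with neural networks over billions of tokens, not raw counts.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vast swathes of web text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed successor of `word`."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else ""

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Chaining such predictions word by word is, at vastly greater scale and with learned rather than counted statistics, how generative models assemble fluent text without any sentience behind it.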
With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making? Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information. I compare it to when

WAIE 2025: With four focus areas – booth reservations now open

The 6th Global Digital Economy Industry Conference 2025 & World AI Industry Exhibition (WAIE) is scheduled from July 30 to August 1 at Shenzhen (Futian) Convention and Exhibition Centre. The event presents an opportunity for manufacturers to find industry partners and collaboration opportunities, and secure domestic and international orders.

The Greater Bay Area has a reputation as a place where industries can transform and grow. Its vibrant large economy, global outreach, and important manufacturing capabilities make it the natural centre of innovation and economic growth. As a breeding ground for scientific and technological progress, it’s the place chosen to bring innovators, manufacturers, and heads of industry together under one roof. Now in its sixth year, WAIE 2025 will focus on “Key Technologies for Intelligent Industry,” and be a showcase for a range of products and services that can be demonstrated in real-world, practical scenarios.

The Digital Economy: Empowering Intelligent Industry Development

WAIE 2025 will showcase a range of solutions including sensor instruments and advanced lasers, industrial optoelectronics, robotics and intelligent factories, industrial internet, software & cloud, and photovoltaic energy storage. The aim is to connect core technologies and new products with industry and supply chains.

The Five Exhibition Areas

Industrial Chip and Sensor Instrument Expo

Intelligent industrial sensors, MEMS sensors, 3D/machine vision, laser radar, sensing elements, light sources, lenses, visual sensing, millimetre wave radar, and instrumentation & measurement.
Integrated circuit design: wafer manufacturing and advanced packaging technology, power device/MEMS sealing and testing, silicon wafer and IC packaging.
Semiconductors: materials, electronic core components and parts, associated equipment.
Electronic terminal applications: including 5G, human-computer interaction, AIOT, and other related products.
Advanced Laser and Industrial Optoelectronics Expo

Laser accessories, modules and components, fibre lasers, solid-state lasers, semiconductor lasers, new laser technologies, sheet metal cutting equipment, laser welding equipment, laser cleaning equipment, and material processing systems.
Optics: optical materials, components and elements, optical lenses, precision optical testing and measurement instruments, and optical processing equipment.

Robot and Intelligent Factory Expo

Industrial robots: industrial robot systems, collaborative robots, mobile robots, humanoid robots, robot system integration, machine vision, manipulators, reducers, related machinery devices & components, and auxiliary equipment and tools.
Intelligent factory production line: industrial automation equipment, intelligent factory total solutions, SMT, non-standard automation production lines, dispensers, automated production lines, inspection/instrumentation/testing and measurement technology, intelligent assembly and transfer equipment, reducers, motors, human-machine interface (HMI), motion servos, enclosures and cabinets, embedded systems, industrial power supplies, connectors, electrical equipment, process and energy automation systems, and intelligent factory management systems.

Industrial Internet, Software and Cloud Expo

Industrial internet platforms, including cloud computing and big data, data centre infrastructure, and data analysis and mining.
Industrial internet network interconnectivity, including communication network services, IDC, CPS, IoT, AI, and edge computing.
Industrial internet security for network and data security.
Industrial internet identification and data collection.
Industrial software and integrated solutions, including industrial control software, MRO supply chain management, and embedded software.
The digital factory, comprising R&D and product development (CAX/MES/PLM), enterprise resource planning (ERP), mechanical automation, supply chain management, and intelligent factory management systems.

Photovoltaic Energy Storage Industrial Application Expo

Solar photovoltaic power generation, including PV production equipment, batteries, modules, raw materials, photovoltaic engineering and systems, and photovoltaic application products.
Energy storage power generation, including storage batteries, energy storage technology and materials, storage systems and EPC projects, energy storage equipment and components, and storage application solutions.
Lithium battery related products and equipment, technical solutions and applications, and energy management.

Over 20 forum events

WAIE 2025 will host more than 20 forums and summits on the applications of intelligent industrial digitalisation. There will also be award ceremonies, project matchmaking, roadshow presentations, and product launches. The event will feature more than 100 speakers and play host to over 100 experts, scholars, academicians, and entrepreneurs from diverse fields. The event aims to cover all elements of the industry supply chain, promote the development of new technologies and applications, and inject new energy and innovation into the industry.

Highlight: Focus Summit + Exhibition, to boost brand exposure and customer acquisition.

Click to register for WAIE 2025.

Anthropic provides insights into the ‘AI biology’ of Claude

Anthropic has provided a more detailed look into the complex inner workings of their advanced language model, Claude. This work aims to demystify how these sophisticated AI systems process information, learn strategies, and ultimately generate human-like text. As the researchers initially highlighted, the internal processes of these models can be remarkably opaque, with their problem-solving methods often “inscrutable to us, the model’s developers.” Gaining a deeper understanding of this “AI biology” is paramount for ensuring the reliability, safety, and trustworthiness of these increasingly powerful technologies.

Anthropic’s latest findings, primarily focused on their Claude 3.5 Haiku model, offer valuable insights into several key aspects of its cognitive processes. One of the most fascinating discoveries suggests that Claude operates with a degree of conceptual universality across different languages. Through analysis of how the model processes translated sentences, Anthropic found evidence of shared underlying features. This indicates that Claude might possess a fundamental “language of thought” that transcends specific linguistic structures, allowing it to understand and apply knowledge learned in one language when working with another.

Anthropic’s research also challenged previous assumptions about how language models approach creative tasks like poetry writing. Instead of a purely sequential, word-by-word generation process, Anthropic revealed that Claude actively plans ahead. In the context of rhyming poetry, the model anticipates future words to meet constraints like rhyme and meaning — demonstrating a level of foresight that goes beyond simple next-word prediction.

However, the research also uncovered potentially concerning behaviours. Anthropic found instances where Claude could generate plausible-sounding but ultimately incorrect reasoning, especially when grappling with complex problems or when provided with misleading hints.
The ability to “catch it in the act” of fabricating explanations underscores the importance of developing tools to monitor and understand the internal decision-making processes of AI models. Anthropic emphasises the significance of their “build a microscope” approach to AI interpretability. This methodology allows them to uncover insights into the inner workings of these systems that might not be apparent through simply observing their outputs. As they noted, this approach allows them to learn many things they “wouldn’t have guessed going in,” a crucial capability as AI models continue to evolve in sophistication.

The implications of this research extend beyond mere scientific curiosity. By gaining a better understanding of how AI models function, researchers can work towards building more reliable and transparent systems. Anthropic believes that this kind of interpretability research is vital for ensuring that AI aligns with human values and warrants our trust.

Their investigations delved into specific areas:

Multilingual understanding: Evidence points to a shared conceptual foundation enabling Claude to process and connect information across various languages.
Creative planning: The model demonstrates an ability to plan ahead in creative tasks, such as anticipating rhymes in poetry.
Reasoning fidelity: Anthropic’s techniques can help distinguish between genuine logical reasoning and instances where the model might fabricate explanations.
Mathematical processing: Claude employs a combination of approximate and precise strategies when performing mental arithmetic.
Complex problem-solving: The model often tackles multi-step reasoning tasks by combining independent pieces of information.
Hallucination mechanisms: The default behaviour in Claude is to decline answering if unsure, with hallucinations potentially arising from a misfiring of its “known entities” recognition system.
Vulnerability to jailbreaks: The model’s tendency to maintain grammatical coherence can be exploited in jailbreaking attempts.

Anthropic’s research provides detailed insights into the inner mechanisms of advanced language models like Claude. This ongoing work is crucial for fostering a deeper understanding of these complex systems and building more trustworthy and dependable AI.
