One of the four main themes of this year's World Economic Forum in Davos was "Artificial Intelligence (AI) as a Powerhouse of Economy and Society." There were 10-15 sessions that at least touched on AI, if not focused solely on this highly impactful technology. Many of these panels highlighted the potential benefits of generative AI and large language models (LLMs) not only for industries such as fintech, health research, and climate science, but also for our personal lives and society at large. Concerns about the potential impact of widespread adoption were also emphasized.
For AI to work effectively, it needs large amounts of data to train on. The positive outcomes of this include deep learning models that can recognize complex patterns and generate accurate predictions, enabling applications such as biometric authentication for homeland security and financial fraud prevention. Currently, the most common applications of emerging AI are big data algorithms for targeted advertising and real-time translation, which continue to improve as data pools expand.
However, a line must be drawn between information that is made publicly available to train AI systems and personal or proprietary data that, despite being sensitive, is harvested and analyzed without the user's consent. Biometric security is one example: while it is invaluable for securing borders, biometric data is deeply personal and can easily be exploited if it falls into the wrong hands.
This brings up another concern with AI: the potential for leaks and compromise. Unfortunately, most existing AI and LLM platforms and applications (such as ChatGPT) are riddled with vulnerabilities, and many large companies have banned their use to protect corporate secrets. This trend appears to be growing in scale and scope.
Therefore, regulation was also widely discussed at Davos, especially regulation related to privacy, along with the urgent need to limit the scope of AI now and in the future. Many data-related regulations are already in place, such as HIPAA, GDPR, and CCPA/CPRA, which require companies to be transparent about their use of personal information and to let consumers opt out of having their personal data used. While this is effective in promoting accountability, regulations and policies do not actually protect data from leaks or attack vectors.
Challenges in secure data processing
The only way to truly protect our privacy is to proactively implement the most secure and innovative technological measures at our disposal: measures that preserve privacy and keep data encrypted while still giving breakthrough technologies such as generative AI models and cloud computing tools full access to large data pools to unleash their full potential.
Securing data while it is at rest (i.e., in storage) or in transit (i.e., moving within or between networks) is already ubiquitous: the data is encrypted, and this is usually enough to keep it safe from unwanted access. The outstanding challenge is how to protect data while it is in use (i.e., during processing and analysis).
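To make the distinction concrete, here is a minimal sketch of conventional at-rest encryption in Python using the widely available `cryptography` package. The record contents and variable names are illustrative, not drawn from this article.

```python
# Conventional "at rest" encryption: strong while stored, but the data
# must be decrypted before anything can compute on it.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # symmetric key held by the data owner
vault = Fernet(key)

record = b"patient_id=4821;glucose=142"   # hypothetical sensitive record
stored = vault.encrypt(record)            # safe at rest or in transit

# The gap: to *use* the record (analyze it, feed it to a model), it must
# first be decrypted, exposing the plaintext to whichever host computes on it.
plaintext = vault.decrypt(stored)
```

Everything that follows is about closing that last gap: protecting the data during the compute step itself.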
A major privacy-enhancing technology currently being used at scale is confidential computing. Confidential computing attempts to protect a company's IP and sensitive data by creating a dedicated enclave, called a Trusted Execution Environment (TEE), within the server CPU where sensitive data is processed. Access to the TEE is restricted, so when data is decrypted for processing, it cannot be accessed by any computing resource other than those inside the TEE.
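As a rough illustration of the access pattern a TEE enforces, consider this conceptual Python sketch. It is not real enclave code (actual TEEs such as Intel SGX or AWS Nitro Enclaves are hardware and hypervisor features), and every name in it is hypothetical.

```python
# Conceptual sketch of the TEE pattern: ciphertext in, ciphertext out,
# with plaintext existing only inside the trusted boundary.
from cryptography.fernet import Fernet

def run_in_enclave(sealed: bytes, enclave_key: bytes) -> bytes:
    """Stands in for code running inside the TEE; only this
    function ever sees the plaintext."""
    plaintext = Fernet(enclave_key).decrypt(sealed)   # decrypt inside
    result = plaintext.upper()                        # process inside
    return Fernet(enclave_key).encrypt(result)        # re-seal on exit

key = Fernet.generate_key()                  # provisioned to the enclave
sealed = Fernet(key).encrypt(b"sensitive")   # the host sees only this
sealed_result = run_in_enclave(sealed, key)
```

Note what the sketch cannot hide: inside the boundary, the data is plaintext all the same, which is exactly the weakness discussed below.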
One of the big problems with confidential computing is that it does not scale to the range of use cases needed to cover all possible AI models and cloud instances. Because a TEE must be created and defined for each specific use case, securing data this way is constrained by the time, effort, and cost involved.
But the bigger problem with confidential computing is that it is not foolproof. Data within the TEE must be unencrypted in order to be processed, opening the door for attack vectors that exploit vulnerabilities within the environment. If data is decrypted at any point in its lifecycle, it can be exposed. Additionally, when AI or computing tools access personal data, all anonymity is lost once that data is decrypted, even inside a TEE.
Revolutionizing data privacy
The only post-quantum technology for privacy is lattice-based fully homomorphic encryption (FHE), which allows data to remain encrypted throughout its lifecycle, including during processing. This prevents leaks and data breaches and guarantees the anonymity of data while it is in use.
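Lattice-based FHE itself is too involved to sketch in a few lines, but the core idea of computing directly on ciphertexts can be shown with a toy additively homomorphic scheme (Paillier, which is neither FHE nor lattice-based). This is a didactic sketch with demo-sized primes, not production cryptography.

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the *sum* of the plaintexts, so a server can add
# numbers it never sees. FHE generalizes this to arbitrary computation.
import random
from math import gcd

p, q = 293, 433                 # demo primes; real keys are 2048+ bits
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)        # modular inverse
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(17), encrypt(25)
total = (a * b) % n2            # ciphertext "addition"
assert decrypt(total) == 42     # computed without ever decrypting a or b
```

The leap FHE makes is supporting both addition and multiplication on ciphertexts, enough to evaluate arbitrary circuits, and that generality is precisely what makes it so computationally heavy.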
The benefits of FHE extend both to the effectiveness of AI and cloud computing tools and to the security of the individuals and businesses tasked with protecting data. For example, imagine how effective AI models for early cancer detection would be if they had access to millions of patient records instead of thousands, while all of those records remained securely encrypted, impossible to compromise or leak, and no patient was ever identifiable to the model. Confidentiality would be maintained at all times.
To date, the barrier that has limited the adoption and large-scale use of FHE is the enormous processing load it imposes, with severe bottlenecks in memory, compute, and bandwidth. It is estimated that implementing FHE across hyperscale cloud data centers would require acceleration a million times beyond today's latest-generation CPUs and GPUs. Software-based solutions have emerged in recent years, but they still struggle to achieve the scale needed to meet the computational demands of machine learning, deep learning, neural networks, and heavy algorithmic operations in the cloud.
Only a dedicated architecture can address these specific bottlenecks, enabling real-time FHE at a TCO comparable to processing unencrypted data and making the experience indistinguishable to end users from processing on a CPU or any other type of processor. It is thus becoming increasingly clear why OpenAI CEO Sam Altman is investing $1 billion in the development of dedicated hardware processors for private LLMs, and why hyperscale cloud service providers are following suit.
Privacy: The next frontier
Now that generative AI has emerged as a centerpiece at Davos and other global forums, it is rightly gaining attention for both its potential benefits to society and its drawbacks. When analyzing the challenges posed by AI, privacy is inevitably a prominent issue.
As such, privacy is quickly becoming the technology industry's next big frontier. As technological advances that leverage our personal data surface more than ever, and as data is created and processed at exponential rates, the demand for security measures that ensure our privacy keeps growing.
Regulations cannot protect us. Only technical solutions can address technical problems. And when it comes to privacy, only dedicated post-quantum solutions will prevail.
This article was produced as part of TechRadar Pro's Expert Insights channel, featuring some of the brightest minds in technology today. The views expressed here are those of the author and not necessarily those of TechRadar Pro or Future plc. If you're interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro