Despite all the recent AI hype, robots are not going to take our jobs.
Recent years have brought a wave of AI hype, ranging from fantasies of self-driving cars to fears of AI bots that could end the world. AI has captured our imaginations, our nightmares, and our daydreams. The truth is that AI is considerably less advanced today than we projected it would be by now. Autonomous vehicles, for instance, frequently cited as proof of AI's boundless potential, remain a narrow use case and are not yet widely deployed across the transportation industry.
AI Terminology De-Hyped
AI vs. ML
Artificial intelligence and machine learning are two terms that are often used interchangeably, but they refer to different ideas. AI aims to produce intelligence: the cognitive abilities needed, for example, to pass the Turing test. AI is used to simulate human behavior, such as by building cleaning robots that operate much like human cleaners.
ML is a subfield of AI. It is built on mathematical models and relies on the combination of machines and data: an ML system learns from past occurrences. Because of this, ML can perform tasks that humans cannot, such as sifting through enormous amounts of data, discovering patterns, forecasting probabilities, and more.
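To make "learning from occurrences" concrete, here is a minimal sketch in which the "model" is nothing more than the empirical frequency of each event type in a log; the event names and threshold are invented for illustration.

```python
from collections import Counter

def rare_events(events, threshold=0.1):
    """Flag event types whose observed frequency falls below a threshold.

    A toy illustration of learning from past occurrences: the "model"
    is just the empirical frequency of each event type.
    """
    counts = Counter(events)
    total = len(events)
    return [e for e, c in counts.items() if c / total < threshold]

# Mostly routine activity, plus one rare (hypothetical) event type.
log = ["login"] * 18 + ["logout"] * 10 + ["priv_escalation"] * 2
print(rare_events(log))  # → ['priv_escalation']
```

Real ML systems replace the frequency table with statistical models, but the principle is the same: past data shapes what the system flags next.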
General vs. Narrow AI
The idea of general artificial intelligence (general AI) is the one that most often frightens people, since it is the scenario of "robot overlords" replacing humans. Even if the concept is technically feasible, we are not there yet.
Narrow AI, in contrast to general AI, is a specialized form of AI tailored to very specific tasks. That focus lets it serve people by relieving us of work that would otherwise be too taxing or hazardous; it is not meant to replace us. Across industries, narrow AI is already used for tasks like automobile manufacturing and box packing. In cybersecurity, narrow AI can examine activity data and logs, looking for anomalies or indicators of an attack.
AI and ML in the Wild
Generative AI, supervised ML, and unsupervised ML are the three most prevalent models of AI and ML in use today.
Generative AI
Generative AI is the cutting-edge branch of artificial intelligence characterized by models trained on a body of knowledge, such as large language models (LLMs). Based on the data in that corpus, generative AI can produce new content. It has been compared to "autocorrect" or "type-ahead," but at a far larger scale. Generative AI applications include ChatGPT, Bing, Bard, DALL-E, and dedicated cyber assistants like Microsoft Security Copilot and IBM Security QRadar Advisor with Watson.
Generative AI is well suited to use cases like ideation, assisted copyediting, and research against a trusted corpus.
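The "type-ahead at scale" comparison can be made concrete with a toy next-word predictor. The sketch below trains a bigram model on a tiny invented corpus; real LLMs apply the same idea, predicting the next token from context, with vastly larger models and data.

```python
from collections import Counter, defaultdict

class BigramModel:
    """A miniature 'type-ahead' model: generative AI reduced to its
    simplest form -- predicting the next word from the previous one."""

    def __init__(self, corpus):
        self.next_words = defaultdict(Counter)
        words = corpus.split()
        # Count which word follows which in the training text.
        for a, b in zip(words, words[1:]):
            self.next_words[a][b] += 1

    def suggest(self, word):
        # Return the most frequent continuation seen in training.
        counts = self.next_words.get(word)
        return counts.most_common(1)[0][0] if counts else None

# Invented training text for illustration only.
corpus = "the attacker sent a phishing email and the attacker sent malware"
model = BigramModel(corpus)
print(model.suggest("attacker"))  # → sent
```

The gap between this and ChatGPT is enormous, but it is a gap of scale and architecture, not of basic premise.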
Unsupervised Learning
In machine learning, unsupervised learning occurs when the training data is unlabeled: neither inputs nor outcomes carry labels. With this approach, algorithms can discover patterns, clusters, and relationships in data without human involvement. Unsupervised learning is widely used in retail websites and other dynamic recommendation systems.
In cybersecurity, unsupervised learning can be used for clustering or grouping, as well as for surfacing patterns that were not previously obvious. For instance, it can identify all malware with a particular signature that originates from a particular nation-state. It can also look for connections and relationships between data sets, identifying, for example, whether recipients of phishing emails are more likely to reuse passwords.
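A bare-bones k-means clustering sketch shows how unlabeled data separates into groups on its own; the host feature tuples (failed logins, megabytes transferred) are invented for illustration.

```python
def kmeans(points, k, iters=10):
    """Minimal k-means: group unlabeled 2-D points into k clusters.
    No human labeling is involved -- structure emerges from the data."""
    centroids = [tuple(p) for p in points[:k]]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                          + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Hypothetical per-host features: (failed logins, MB transferred).
normal = [(1, 2), (2, 1), (1, 1)]
suspicious = [(40, 90), (42, 88)]
groups = kmeans(normal + suspicious, k=2)
# The two anomalous hosts end up grouped together, with no labels given.
```

Production systems use hardened implementations (and better initialization), but the core mechanism is the same: similarity in the data, not human labels, defines the groups.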
That said, unsupervised learning is not always the best option; when labeled data is available and prediction is the goal, supervised methods tend to serve better.
Supervised Learning
In supervised learning, the training data is labeled with input/output pairs, and the model's accuracy depends on how well the labeling was done and how complete the dataset is.
Making predictions is where supervised learning excels.
In cybersecurity, supervised learning is used for classification, which can assist in detecting phishing and malware. The cost of a new attack can be predicted with regression analysis over the costs of previous incidents.
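The classification side can be sketched with a nearest-centroid classifier trained on labeled examples; the email features (urgency score, link count) and labels here are invented for illustration, not a real phishing feature set.

```python
def train_centroids(labeled):
    """Compute one centroid per class from labeled (features, label) pairs."""
    sums, counts = {}, {}
    for feats, label in labeled:
        s = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(feats, centroids):
    """Predict the label whose class centroid is nearest (squared distance)."""
    return min(centroids, key=lambda lab: sum(
        (a - b) ** 2 for a, b in zip(feats, centroids[lab])))

# Hypothetical labeled emails: features are (urgency score, link count).
training = [((0.9, 5), "phishing"), ((0.8, 4), "phishing"),
            ((0.1, 1), "legit"),    ((0.2, 0), "legit")]
centroids = train_centroids(training)
print(classify((0.85, 6), centroids))  # → phishing
```

The labels in the training pairs are exactly the human effort supervised learning depends on, which is why labeling cost and dataset completeness matter so much.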
Supervised learning is not the ideal option when there isn't enough time to train the model or no one is available to label the data. It is also not advised when large-scale data analysis is required, when data is insufficient, or when automated categorization or clustering is the desired outcome.
NIST's AI RMF (Artificial Intelligence Risk Management Framework)
When working with AI and AI-based solutions, it's critical to understand AI's limitations, risks, and vulnerabilities. NIST's Artificial Intelligence Risk Management Framework (AI RMF) is a set of guidelines and best practices designed to help organizations identify, assess, and manage the risks associated with deploying and using artificial intelligence technologies.
The framework defines characteristics of trustworthy AI, including:
Valid and Reliable –
AI can produce inaccurate information, known in generative AI as "hallucinations." It's crucial for organizations to verify the accuracy and reliability of the AI they deploy.
Safe –
Ensuring that information submitted in prompts isn't exposed to other users, as happened in the widely reported Samsung incident.
Secure and Resilient –
Attackers are already using AI in cyberattacks. Organizations must ensure the AI system is secure, protected from attack, and able to withstand attempts to exploit it or to enlist it in attacks.
Accountable and Transparent –
Organizations should be able to understand how the AI system reaches its conclusions, establish who is accountable for its outputs, and be transparent with users about how AI is being used.
Privacy-enhanced –
Ensuring that prompted information is protected and anonymized, both in the data lake and when used.
Additional resources for addressing AI risk include MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), the OWASP Top 10 for ML, and Google's Secure AI Framework (SAIF).