AI Archives - Cyber Secure Forum | Forum Events Ltd

30% of the increase in demand for APIs will come from AI and LLMs

By Stuart O'Brien

More than 30% of the increase in demand for application programming interfaces (APIs) will come from AI and tools using large language models (LLMs) by 2026, according to Gartner.

“With technology service providers (TSPs) leading the charge in GenAI adoption, the fallout will be widespread,” said Adrian Lee, VP Analyst at Gartner. “This includes increased demand on APIs for LLM- and GenAI-enabled solutions due to TSPs helping enterprise customers further along in their journey. This means that TSPs will have to move quicker than ever before to meet the demand.”

A Gartner survey of 459 TSPs conducted from October to December 2023 found that 83% of respondents reported they either have already deployed or are currently piloting generative AI (GenAI) within their organizations.

“Enterprise customers must determine the optimal ways GenAI can be added to offerings, such as by using third-party APIs or open-source model options. With TSPs leading the charge, they provide a natural connection between these enterprise customers and their needs for GenAI-enabled solutions,” said Lee.

The survey found that half of TSPs will make strategic changes to extend their core product/service offerings to realize a whole product or end-to-end services solution.

With this in mind, Gartner predicts that by 2026 more than 80% of independent software vendors will have embedded GenAI capabilities in their enterprise applications, up from less than 5% today.

“Enterprise customers are at different levels of readiness and maturity in their adoption of GenAI, and TSPs have a transformational opportunity to provide the software and infrastructure capabilities, as well as the talent and expertise, to accelerate the journey,” said Lee.

Throughout the product life cycle, TSPs need to understand the limitations, risks and overhead before embedding GenAI capabilities into products and services. To achieve this, they should:

  • Document the use case and clearly define the value that users will experience by having GenAI as part of the product.
  • Determine the optimal ways GenAI can be added to offerings (such as by using third-party APIs or open-source model options) and consider how the costs of new features may affect pricing decisions.
  • Address users’ prompting experience by building optimizations to avoid user friction with steep learning curves.
  • Review the different use-case-specific risks, such as inaccurate results, data privacy, secure conversations and IP infringement, by adding guardrails specific to each risk into the product.
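
To make the last recommendation concrete, a guardrail can be as simple as screening model output against use-case-specific risk rules before it reaches the user. The sketch below is deliberately minimal and entirely hypothetical (the blocked-topic list and function names are illustrative); production guardrails typically combine trained classifiers, policy engines and human review:

```python
# Toy output guardrail: withhold a GenAI response if it appears to
# leak sensitive data. Real guardrails cover each risk class
# (inaccuracy, privacy, IP infringement) with dedicated checks.
BLOCKED_TOPICS = {"ssn", "credit card", "api key"}

def apply_guardrail(model_output):
    lowered = model_output.lower()
    flagged = [t for t in BLOCKED_TOPICS if t in lowered]
    if flagged:
        return "[response withheld: potential sensitive-data leak]"
    return model_output

assert apply_guardrail("Here is the report summary.") == "Here is the report summary."
assert "withheld" in apply_guardrail("The customer's credit card number is on file.")
```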

Photo by Growtika on Unsplash

IT experts poll: Elon Musk is ‘wrong’ that no jobs will be needed in the future

By Stuart O'Brien

Elon Musk’s claim that AI will make all human jobs irrelevant should not be taken seriously, according to a survey of tech experts conducted by BCS, The Chartered Institute for IT.

During an interview with UK Prime Minister Rishi Sunak for the AI Safety Summit last year, Musk said: ‘There will come a point where no job is needed — you can have a job if you wanted to have a job … personal satisfaction, but the AI will be able to do everything.’

But in a poll by BCS, The Chartered Institute for IT, 72% of tech professionals disagreed with Musk’s view that AI will render work unnecessary. Some 14% agreed (but only 5% ‘strongly’ agreed), with the rest unsure.

In comments, many IT experts said Musk’s statement was ‘hyperbole’ and suggested it was made to create headlines.

Those currently working in computing agreed that AI could replace a range of jobs, but would also create new roles, including oversight of AI decision making – known as ‘human in the loop’.

They also said that a number of jobs, for example hairdressing, were unlikely to be replaced by AI in the near future, despite advances in robotics.

BCS’ AI and Digital in Business Life survey also found AI would have the most immediate impact this year on customer services (for example chatbots replacing human advisers).

This was followed by information technology; health and social care; publishing and broadcasting; and education.

Leaders ranked their top business priorities as cyber security (69%), AI (58%) and business process automation (45%).

Only 8% of participants told BCS their organisation has enough resources to achieve their priorities.

Cyber attacks were most likely to keep IT managers awake at night in 2024 – this result has been consistent over the last 11 years of the survey.

Rashik Parmar MBE, Chief Executive of BCS, The Chartered Institute for IT said: “AI won’t make work meaningless – it will redefine what we see as meaningful work.

“Tech professionals are far more concerned about how ‘ordinary’ AI is affecting people’s lives today, for example, assessing us for credit and invitations to job interviews, or being used by bad actors to generate fake news and influence elections. The priority right now is to ensure AI works with us, rather than waiting for a Utopia.

“To build trust in this transformational technology, everyone working in a responsible AI role should be a registered professional meeting the highest standards of ethical conduct.”

The BCS poll was carried out with over 800 IT professionals, ranging from IT Directors and Chief Information Officers, to software developers, academics and engineers.

Photo by Arif Riyanto on Unsplash

Is defensive AI the key to guarding against emerging cyber threats?

By Stuart O'Brien

Google’s recent announcement of an artificial intelligence (AI) Cyber Defense Initiative to enhance global cybersecurity underscores the importance of defending against increasingly sophisticated and pervasive cyber threats.

And according to analysts at GlobalData, AI will play a pivotal role in collecting, processing, and neutralising threats, transforming the way organisations combat cyber risks.

Looking at AI cyber threat detection technology through the lens of innovation using GlobalData’s Technology Foresights tool reveals some compelling insights. Patent filings have surged from 387 in 2018 to 1,098 in 2023, highlighting a robust growth trajectory in AI-driven security solutions. Furthermore, the entry of 53 new companies in 2023, for a total of 239, showcases the expanding interest and investment in this critical area of technology.
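
As a quick sanity check on those figures, a compound annual growth rate can be computed directly from the two patent counts (the 387-to-1,098 window spans the five years 2018-2023; the 13% figure quoted below covers only the last three years, so the numbers are not in conflict). A minimal sketch:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two observations."""
    return (end / start) ** (1 / years) - 1

# Patent filings: 387 in 2018 -> 1,098 in 2023 (five-year window)
growth = cagr(387, 1098, 5)
assert 0.23 < growth < 0.235  # roughly 23% per year over the full window
```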

Vaibhav Gundre, Project Manager of Disruptive Tech at GlobalData, said: “The ability of AI to improve threat identification, streamline the management of vulnerabilities, and enhance the efficiency of incident responses is key in addressing the continuous evolution of cyber threats. The rapid progression in the field of defensive AI is underscored by a 13% compound annual growth rate in patent applications over the last three years, reflecting a strong commitment to innovation. This trend is also indicative of the recognized importance of having formidable cyber defense systems in place, signifying substantial research and development activities aimed at overcoming new cyber threats.”

An analysis of GlobalData’s Disruptor Intelligence Center highlights the partnership between AIShield and DEKRA as a notable collaboration aimed at enhancing the security of AI models and systems. Through advanced training, assessment, and protection strategies, the partnership seeks to bolster cyber resilience across industries and foster trust in AI technologies.

Similarly, Darktrace’s collaboration with Cyware exemplifies a proactive approach to cybersecurity. By facilitating collaboration among security teams and sharing threat intelligence, the partnership enables organizations to mitigate risks and respond effectively to emerging cyber threats.

AI cyber threat detection finds application across diverse use cases, including threat detection in security cameras, real-time malware detection, network threat detection, anomaly detection in critical infrastructure, fraud prevention, and AI-powered surveillance systems.

Gundre concluded: “As organizations harness the power of AI cyber threat detection, they must also confront significant challenges. The rapid evolution of cyber threats, coupled with the complexity of regulatory landscapes, underscores the need for continuous innovation and collaboration. While patents and partnerships lay the foundation for robust cyber defense strategies, addressing these challenges will require a concerted effort from industry stakeholders. By staying vigilant and embracing a proactive approach, organizations can navigate the evolving cybersecurity landscape with confidence, safeguarding critical assets and preserving digital trust.”

Photo by Mitchell Luo on Unsplash

Is generative AI the next big cyber threat for businesses?

By Stuart O'Brien

By Robert Smith, Product Manager, Cyber Security at M247

Unless you’ve been living under a rock over the past twelve months, you will have heard all about ChatGPT by now.

A shorthand for ‘Chat Generative Pre-Trained Transformer’, the smart chatbot exploded onto the tech scene in November last year, amassing 100 million users in its first two months to become the fastest growing consumer application in history. Since then, it has piqued the curiosity of almost every sector – from artists and musicians to marketers and IT managers.

ChatGPT is, in many ways, the poster child for the new wave of generative AI tools taking these sectors by storm – Bing, Google’s Vertex AI and Bard, to name a few. These tools’ user-friendly interfaces, and ability to take even the most niche, specific prompts, and convert them into everything from artwork to detailed essays, have left most of us wondering: what is next for us, and more specifically, what is next for our jobs? So much so that a report released last year found that nearly two thirds of UK workers think AI will take over more jobs than it creates.

However, while the question around AI and what it means for the future of work is certainly an important one, something that is too often overlooked in these discussions is the impact this technology is currently having on our security and safety.

The threat of ‘FraudGPT’

According to Check Point Research, the arrival of advanced AI technology had already contributed to an 8% rise in weekly cyber-attacks in the second quarter of 2023. We even asked ChatGPT if its technology is being used by cyber-criminals to target businesses. “It’s certainly possible they could attempt to use advanced language models or similar technology to assist in their activities…”, said ChatGPT.

And it was right. Just as businesses are constantly looking for new solutions to adopt, or more sophisticated tools to develop that will enhance their objectives, bad actors and cyber-criminals are doing the same. The only difference between the two is that cyber-criminals are using tools such as AI to steal your data and compromise your devices. And now we’re witnessing this in plain sight with the likes of ‘FraudGPT’ flooding the dark web.

FraudGPT is an AI-powered chatbot marketed to cyber-criminals as a tool to support the creation of malware, malicious code, phishing e-mails, and many other fraudulent outputs. Using the same user-friendly prompts as its predecessor, ChatGPT, FraudGPT and other tools are allowing hackers to take similar shortcuts and produce useful content in order to steal data and create havoc for businesses.

As with any sophisticated language model, one of FraudGPT’s biggest strengths (or threats) is its ability to produce convincing e-mails, documents and even replicate human conversations in order to steal data or gain access to a business’ systems. Very soon, it’s highly likely that those blatantly obvious phishing e-mails in your inbox may not be so easy to spot.

And it doesn’t stop there. More and more hackers are likely to start using these AI-powered tools across every stage of the cyber ‘kill chain’, leveraging this technology to develop malware, identify vulnerabilities, and even orchestrate their attacks. There are already bots out there that can scan the entire internet within 24 hours for potential vulnerabilities to exploit, and these are constantly being updated. So, if AI is going to become a hacker’s best friend, businesses will need to evolve and adopt the latest technology too, in order to keep pace with them.

What can businesses do?

To start with, IT managers (or whoever is responsible for cyber-security within your organisation) must make it their priority to stay on top of the latest hacking methods and constantly scan for new solutions that can safeguard data.

Endpoint Detection and Response (EDR) is one great example of a robust defence businesses can put in place today. EDR uses smart behavioural analysis to monitor your data and the things you usually do on your devices, and can therefore detect even minor abnormalities in your daily activities. If an EDR system detects that an AI has launched an attack on your business, it can give your IT team a heads up so they can form a response and resolve the issue. In fact, most cyber insurers today insist that businesses adopt EDR as a key risk control before offering cover.
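
At its core, the behavioural analysis described above comes down to baselining normal activity and flagging deviations. The sketch below illustrates that idea with a simple statistical threshold; it is purely illustrative and not how any particular EDR product works (real products model far richer signals than a single counter):

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag a reading that deviates more than `threshold` standard
    deviations from the historical baseline mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Baseline: files touched per hour by a user during a normal week
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
assert not is_anomalous(baseline, 17)   # slightly busy, still normal
assert is_anomalous(baseline, 400)      # mass file access -> alert
```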

Cyber-security providers, such as Fortinet and Microsoft, have already begun incorporating AI into their solutions, too, but making sure you have the latest machine learning and AI (not just simple, predictive AI) operating in the background to detect threats will give your business the upper hand when it comes to hackers.

And finally, educate your workforce. Although many are worried that AI will overtake us in the workplace and steal our jobs, it’s unlikely the power of human intuition will be replaced anytime soon. So, by arming your team with the latest training on AI and cyber-threats – and what to do when they suspect an AI-powered threat is happening – you can outsmart this new technology and keep the hackers at bay.

Threat Predictions for 2024: Chained AI and CaaS operations give attackers more ‘easy’ buttons 

By mattd

With the growth of Cybercrime-as-a-Service (CaaS) operations and the advent of generative AI, threat actors have more “easy” buttons at their fingertips to assist with carrying out attacks than ever before. By relying on the growing capabilities in their respective toolboxes, adversaries will increase the sophistication of their activities. They’ll launch more targeted and stealthier hacks designed to evade robust security controls, as well as become more agile by making each tactic in the attack cycle more efficient.

In its 2024 threat predictions report, the FortiGuard Labs team looks at a new era of advanced cybercrime, examines how AI is changing the (attack) game, shares fresh threat trends to watch for this year and beyond, and offers advice on how organisations everywhere can enhance their collective resilience against an evolving threat landscape…

The Evolution of Old Favorites

We’ve been observing and discussing many fan-favorite attack tactics for years, and covered these topics in past reports. The “classics” aren’t going away—instead, they’re evolving and advancing as attackers gain access to new resources. For example, when it comes to advanced persistent cybercrime, we anticipate more activity among a growing number of Advanced Persistent Threat (APT) groups. In addition to the evolution of APT operations, we predict that cybercrime groups, in general, will diversify their targets and playbooks, focusing on more sophisticated and disruptive attacks, and setting their sights on denial of service and extortion.

Cybercrime “turf wars” continue, with multiple attack groups homing in on the same targets and deploying ransomware variants, often within 24 hours. In fact, we’ve observed such a rise in this type of activity that the FBI issued a warning to organizations about it earlier this year.

And let’s not forget about the evolution of generative AI. This weaponisation of AI is adding fuel to an already raging fire, giving attackers an easy means of enhancing many stages of their attacks. As we’ve predicted in the past, we’re seeing cybercriminals increasingly use AI to support malicious activities in new ways, ranging from thwarting the detection of social engineering to mimicking human behavior.

Fresh Threat Trends to Watch for in 2024 and Beyond

While cybercriminals will always rely on tried-and-true tactics and techniques to achieve a quick payday, today’s attackers now have a growing number of tools available to them to assist with attack execution. As cybercrime evolves, we anticipate seeing several fresh trends emerge in 2024 and beyond. Here’s a glimpse of what we expect.

Give me that big (playbook) energy: Over the past few years, ransomware attacks worldwide have skyrocketed, making every organisation, regardless of size or industry, a target. Yet, as an increasing number of cybercriminals launch ransomware attacks to attain a lucrative payday, cybercrime groups are quickly exhausting smaller, easier-to-hack targets. Looking ahead, we predict attackers will take a “go big or go home” approach, with adversaries turning their focus to critical industries—such as healthcare, finance, transportation, and utilities—that, if hacked, would have a sizeable adverse impact on society and make for a more substantial payday for the attacker. They’ll also expand their playbooks, making their activities more personal, aggressive, and destructive in nature.

It’s a new day for zero days: As organisations expand the number of platforms, applications, and technologies they rely on for daily business operations, cybercriminals have unique opportunities to uncover and exploit software vulnerabilities. We’ve observed a record number of zero-days and new Common Vulnerabilities and Exposures (CVEs) emerge in 2023, and that count is still rising. Given how valuable zero days can be for attackers, we expect to see zero-day brokers—cybercrime groups selling zero-days on the dark web to multiple buyers—emerge among the CaaS community. N-days will continue to pose significant risks for organizations as well.

Playing the inside game: Many organisations are leveling up their security controls and adopting new technologies and processes to strengthen their defenses. These enhanced controls make it more difficult for attackers to infiltrate a network externally, so cybercriminals must find new ways to reach their targets. Given this shift, we predict that attackers will continue to shift left with their tactics, reconnaissance, and weaponisation, with groups beginning to recruit from inside target organisations for initial access purposes.

Ushering in “we the people” attacks: Looking ahead, we expect to see attackers take advantage of more geopolitical happenings and event-driven opportunities, such as the 2024 U.S. elections and the Paris 2024 games. While adversaries have always targeted major events, cybercriminals now have new tools at their disposal—generative AI in particular—to support their activities.

Narrowing the TTP playing field: Attackers will inevitably continue to expand the collection of tactics, techniques, and procedures (TTPs) they use to compromise their targets. Yet defenders can gain an advantage by finding ways to disrupt those activities. While most of the day-to-day work done by cybersecurity defenders is related to blocking indicators of compromise, there’s great value in taking a closer look at the TTPs attackers regularly use, which will help narrow the playing field and find potential “choke points on the chess board.”

Making space for more 5G attacks: With access to an ever-increasing array of connected technologies, cybercriminals will inevitably find new opportunities for compromise. With more devices coming online every day, we anticipate that cybercriminals will take greater advantage of connected attacks in the future. A successful attack against 5G infrastructure could easily disrupt critical industries such as oil and gas, transportation, public safety, finance, and healthcare.

Navigating a New Era of Cybercrime

Cybercrime impacts everyone, and the ramifications of a breach are often far-reaching. However, threat actors don’t have to have the upper hand. Our security community can take many actions to better anticipate cybercriminals’ next moves and disrupt their activities: collaborating across the public and private sectors to share threat intelligence, adopting standardized measures for incident reporting, and more.

Organisations also have a vital role to play in disrupting cybercrime. This starts with creating a culture of cyber resilience—making cybersecurity everyone’s job—by implementing ongoing initiatives such as enterprise-wide cybersecurity education programs and more focused activities like tabletop exercises for executives. Finding ways to shrink the cybersecurity skills gap, such as tapping into new talent pools to fill open roles, can help enterprises navigate the combination of overworked IT and security staff as well as the growing threat landscape. And threat sharing will only become more important in the future, as this will help enable the quick mobilization of protections.

Empowering cybersecurity with AI: A vision for the UK’s commercial and public sectors?

By Stuart O'Brien

In the age of digital transformation, cybersecurity threats are becoming increasingly sophisticated, challenging the traditional security measures employed by many UK institutions. Enter Artificial Intelligence (AI) – a game-changer in the realm of cybersecurity for both the commercial and public sectors. AI’s advanced algorithms and predictive analytics offer innovative ways to bolster security infrastructure, making it a valuable ally for cybersecurity professionals…

  1. Proactive Threat Detection:
    • Function: By continuously analysing vast amounts of data, AI can identify patterns and anomalies that might indicate a security breach or an attempted attack.
    • Benefit: Rather than reacting to threats once they’ve occurred, institutions can prevent them, ensuring uninterrupted services and safeguarding sensitive data.
  2. Phishing Attack Prevention:
    • Function: AI can evaluate emails and online communications in real-time, spotting the subtle signs of phishing attempts that might be overlooked by traditional spam filters.
    • Benefit: This significantly reduces the risk of employees unknowingly granting access to unauthorised entities.
  3. Automated Incident Response:
    • Function: When a threat is detected, AI-driven systems can instantly take corrective actions, such as isolating affected devices or blocking malicious IP addresses.
    • Benefit: Swift automated responses ensure minimal damage, even when incidents occur outside regular monitoring hours.
  4. Enhanced User Authentication:
    • Function: Incorporating AI into biometric verification systems, such as facial or voice recognition, results in more accurate user identification.
    • Benefit: This curtails unauthorised access and adds an additional layer of security beyond passwords.
  5. Behavioural Analytics:
    • Function: AI algorithms can learn and monitor the typical behaviour patterns of network users. Any deviation from this pattern, such as accessing sensitive data at odd hours, raises an alert.
    • Benefit: This helps detect insider threats or compromised user accounts more effectively.
  6. Predictive Analysis:
    • Function: AI models can forecast future threat landscapes by analysing current cyberattack trends and patterns.
    • Benefit: Organisations can prepare and evolve their cybersecurity strategies in anticipation of emerging threats.
  7. Vulnerability Management:
    • Function: AI can scan systems to identify weak points or vulnerabilities, prioritising them based on potential impact.
    • Benefit: Cybersecurity professionals can address the most critical vulnerabilities first, ensuring optimal resource allocation.
  8. Natural Language Processing (NLP):
    • Function: AI-powered NLP can scan and interpret human language in documents, emails, and online communications to detect potential threats or sensitive information leaks.
    • Benefit: It provides an additional layer of scrutiny, ensuring data protection and compliance.
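
The phishing-prevention and NLP items above (points 2 and 8) can be illustrated with a toy rule-based scorer. This is a deliberately simplified, hypothetical sketch (the rules and weights are invented for illustration); the AI systems the list describes learn such signals from data rather than hard-coding them:

```python
import re

# Toy phishing heuristics: each matching rule adds to a suspicion score.
RULES = [
    (r"verify your account", 2),
    (r"urgent|immediately|within 24 hours", 1),
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),   # link to a raw IP address
    (r"password|credentials", 1),
]

def phishing_score(email_text):
    text = email_text.lower()
    return sum(weight for pattern, weight in RULES if re.search(pattern, text))

msg = "URGENT: verify your account at http://192.168.4.7/login"
assert phishing_score(msg) >= 5          # urgency + account lure + raw-IP link
assert phishing_score("See you at lunch tomorrow") == 0
```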

By harnessing the capabilities of AI, the UK’s commercial and public sectors can look forward to a more robust cybersecurity posture. Not only does AI enhance threat detection and response, but its predictive capabilities ensure that organisations are always a step ahead of potential cyber adversaries. As cyber threats continue to evolve, so too will AI’s role in countering them, underscoring its pivotal role in the future of cybersecurity.

Learn more about how AI can support your cyber defences at the Security IT Summit.

Where does GenAI fit into the data analytics landscape?

Guest Post


Recently, there has been a lot of interest and hype around Generative Artificial Intelligence (GenAI), such as ChatGPT and Bard. While these applications are more geared towards the consumer, there is a clear uptick in businesses wondering where this technology can fit into their corporate strategy. James Gornall, Cloud Architect Lead, CTS explains the vital difference between headline grabbing consumer tools and proven, enterprise level GenAI…

Understanding AI

Given the recent hype, you’d be forgiven for thinking that AI is a new capability, but in actual fact, businesses have been using some form of AI for years – even if they don’t quite realise it.

One of the many applications of AI in business today is in predictive analytics. By analysing datasets to identify patterns and predict future outcomes, businesses can more accurately forecast sales, manage inventory, detect fraud and anticipate resource requirements.
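As a minimal illustration of the forecasting idea, a simple moving average can project the next period's sales from recent history. Real predictive analytics uses far richer models; the figures here are invented.

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly sales figures.
monthly_sales = [120, 130, 125, 140, 150, 145]
print(moving_average_forecast(monthly_sales))  # mean of the last three months
```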

Using data visualisation tools to make complex data simpler to understand and more accessible, decision-makers can easily spot trends, correlations and outliers, leading them to make better-informed data-driven decisions, faster.

Another application of AI commonly seen is to enhance customer service through the use of AI-powered chatbots and virtual assistants that meet the digital expectations of customers, by providing instant support when needed.

So what’s new?

What is changing with the commercialisation of GenAI is the ability to create entirely new datasets based on what has been learnt previously. GenAI can draw on the millions of images and documents it has been trained on to write text and create imagery at a scale never seen before. This is hugely exciting for organisations’ creative teams, providing unprecedented opportunities to create new content for ideation, testing, and learning at scale. With this, businesses can rapidly generate unique, varied content to support marketing and brand activity.

The technology can use data on customer behaviour to deliver quality personalised shopping experiences. For example, retailers can provide unique catalogues of products tailored to an individual’s preferences, to create a totally immersive, personalised experience. In addition to enhancing customer predictions, GenAI can provide personalised recommendations based on past shopping choices and provide human-like interactions to enhance customer satisfaction.

Furthermore, GenAI supports employees by automating a variety of tasks, including customer service, recommendation, data analysis, and inventory management. In turn, this frees up employees to focus on more strategic tasks.

Controlling AI

The latest generation of consumer GenAI tools has transformed AI awareness at every level of business and society. In the process, they have also done a pretty good job of demonstrating the problems that quickly arise when these tools are misused. Examples range from users who input confidential code into ChatGPT, unaware that they are leaking valuable Intellectual Property (IP) that could surface in the chatbot’s future responses to other people around the world, to lawyers fined for citing fictitious ChatGPT-generated research in a legal case.

While this latest iteration of consumer GenAI tools is bringing awareness to the capabilities of this technology, there is a lack of education around the way it is best used. Companies need to consider the way employees may be using GenAI that could potentially jeopardise corporate data resources and reputation.

With GenAI set to accelerate business transformation, AI and analytics are rightly dominating corporate debate, but as companies adopt GenAI to work alongside employees, it is imperative that they assess the risks and rewards of cloud-based AI technologies as quickly as possible.

Trusted Data Resources

One of the concerns for businesses to consider is the quality and accuracy of the data provided by GenAI tools. This is why it is so important to distinguish between the headline-grabbing consumer tools and enterprise-grade alternatives that have been in place for several years.

Business specific language is key, especially in jargon heavy markets, so it is essential that the GenAI tool being used is trained on industry specific language models.

Security is also vital. Commercial tools allow a business to set up its own local AI environment where information is stored inside the virtual safety perimeter. This environment can be tailored with a business’ documentation, knowledge bases and inventories, so the AI can deliver value specific to that organisation.

While these tools are hugely intuitive, it is also important that people understand how to use them effectively.

Providing structured prompts and being specific in the way questions are asked is one thing, but users need to remember to think critically rather than simply accept the results at face value. A sceptical viewpoint is a prerequisite – at least initially. The quality of GenAI results will improve over time as the technology evolves and people learn how to feed valid data in, so they get valid data out. However, for the time being people need to take the results with a pinch of salt.

It is also essential to consider the ethical uses of AI.

Avoiding bias is a core component of any Environmental, Social and Governance (ESG) policy. Unfortunately, there is an inherent bias that exists in AI algorithms so companies need to be careful, especially when using consumer level GenAI tools.

For example, finance companies need to avoid algorithms producing biased outcomes for customers wanting to access certain products, or offering different interest rates based on discriminatory data.

Similarly, medical organisations need to ensure ubiquitous care across all demographics, especially when different ethnic groups experience varying risk factors for some diseases.

Conclusion

AI is delivering a new level of data democratisation, allowing individuals across businesses to easily access complex analytics that has, until now, been the preserve of data scientists. The increase in awareness and interest has also accelerated investment, transforming the natural language capabilities of chatbots, for example. The barrier to entry has been reduced, allowing companies to innovate and create business specific use cases.

But good business and data principles must still apply. While it is fantastic that companies are now actively exploring the transformative opportunities on offer, they need to take a step back and understand what GenAI means to their business. Before rushing to meet shareholder expectations for AI investment to achieve competitive advantage, businesses must first ask themselves, how can we make the most of GenAI in the most secure and impactful way?

AI: The only defence against rising cyberattacks in the education sector?

Stuart O'Brien

Scott Brooks, Technical Strategist at IT Support company Cheeky Munkey, provides expert insight on how the rise of AI is impacting cyberattacks on schools, and why AI might be the only way for schools and universities to defend themselves against more advanced attacks…

The UK’s education sector is significantly more vulnerable to cyberattacks than education sectors in other countries. In 2022, the UK’s education sector accounted for 16% of total victims on data leak sites, compared to 7% in the US and 4% in France [1].

With 1,500 pupils returning to school today after an additional unplanned week off following the attack on Highgate Wood School, the need to consider how AI can be used to help protect schools against cyberattacks has never been more pressing.

Big businesses such as Google, Tesla and PayPal [2] are using AI systems to improve their cybersecurity solutions. At the same time, cybercriminals are able to use AI technology to create new cyberattack methods which are harder to defend against.

With this in mind, educational institutions must invest in learning about the new kinds of cyber threats they may face and AI cybersecurity systems. This article provides an overview of the new threats AI poses to schools and universities, as well as the reasons that educational institutions should invest in AI as a defensive system.

New AI threats to cybersecurity

Hackers using AI

It’s been found that AI is making cybercrime more accessible, with less-skilled hackers using it to write scripts that enable them to steal files [3]. It’s easy to see how AI can increase the number of hackers by eliminating the need for sophisticated cyber skills.

Hackers can also use machine learning to test the success of the malware they develop. Once a hacker has developed malware, they can model their attack methods to see what is detected by defences. Malware is then adapted to make it more effective, making it much harder for IT staff to catch and respond to threats.

False data can also be used to confuse AI systems. When companies use AI systems for cybersecurity, they learn from historical data to stop attacks. Cybercriminals create false positives, teaching cybersecurity AI models that these patterns and files are ‘safe’. Hackers can then exploit this to infiltrate school systems.

Imitation game

Cyber threats that would once have been categorised as ‘easy’ to repel are getting harder to defend against as AI is improving its ability to imitate humans. A key example of this is phishing emails. Bad grammar and spelling are usually telltale signs warning recipients not to click a link in an email. Attackers are now using chatbots to ensure their spelling and grammar are spot on, making it trickier for school staff to spot the red flags.

Cybersecurity skills gap

Currently, there’s a skills gap within the cybersecurity industry. It’s argued that not enough people have the skill level and knowledge required to develop and implement cybersecurity AI systems. This is because AI is developing at such a rapid pace that it’s hard for professionals to keep up [4].

Hiring people with the specialised skills needed, as well as procuring the software and hardware required for AI security systems, can also be costly – especially for schools with already stretched budgets. This means that educational institutions are likely playing catch-up with hackers.

How can AI help improve cybersecurity?

Although AI can be used for ever-more sophisticated attacks, it can also be a powerful tool for improving cybersecurity.

Analysis

AI offers an improved level of cybersecurity, which can help reduce the likelihood of an attack on schools. By analysing existing security systems and identifying weak points, AI allows IT staff to make necessary changes.

Artificial intelligence systems learn to identify which patterns are normal for a network by using algorithms to assess network traffic. These systems can quickly spot when traffic is unusual and immediately alert security teams to any threats, allowing for rapid action.
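The traffic-baselining approach described above can be sketched as a rolling statistical check: keep a window of recent per-minute request counts and alert when a new reading deviates sharply from the learned baseline. The window size, threshold, and traffic figures are illustrative assumptions, not a real product's defaults.

```python
from collections import deque
from statistics import mean, stdev

class TrafficMonitor:
    """Keep a rolling window of per-minute request counts and flag sharp deviations."""

    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, requests_per_minute):
        """Record a reading; return True if it deviates sharply from the baseline."""
        alert = False
        if len(self.history) >= 5:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(requests_per_minute - mu) > self.threshold * sigma:
                alert = True
        self.history.append(requests_per_minute)
        return alert

monitor = TrafficMonitor()
steady = [100, 104, 98, 101, 103, 99, 102]
alerts = [monitor.observe(r) for r in steady]
print(any(alerts))            # steady traffic: no alert
print(monitor.observe(900))   # sudden burst: alert
```

Production systems learn far richer models of "normal" than a mean and standard deviation, but the detect-and-alert loop has this shape.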

In addition to preventing network attacks, AI can also be used to improve endpoint security. Devices such as laptops and smartphones are commonly targeted by hackers. To combat this threat, AI security solutions scan for malware within files – quarantining anything suspicious.

Advanced data processing

AI-based security solutions are continuously learning and can process huge volumes of data. This means that they can detect new threats and defend against them in real-time. By picking up on subtle patterns, these systems are able to detect threats that humans would likely miss. It also enables AI to keep up with ever-changing attacks better than traditional antivirus software, which relies on a database of known malware behaviours and cannot identify threats outside of that database.

The ability of AI systems to handle so much data also makes their implementation incredibly scalable. These systems can handle increasing volumes of data in cloud environments and Internet of Things devices and networks.

Working with humans

Since AI systems can automatically identify threats and communicate the severity and impact of an attack, they help cybersecurity teams to prioritise their work. This saves workers time and energy, allowing them to respond to more urgent security threats.

Task automation is another key benefit of AI for educational institutions. AI systems can automate tasks such as routine assessments of system vulnerabilities and patch management. This reduces the workload of external cybersecurity teams and allows for more efficient working, reducing costs for schools and universities. By automating these tasks, AI can alleviate the shortage of skilled workers, addressing the cyber skills gap [5].
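The prioritisation idea above can be sketched with a simple priority queue that orders alerts by a severity score. The alert names and scores are hypothetical, standing in for scores an AI model is assumed to have already assigned.

```python
import heapq

def triage(alerts):
    """Yield alerts in descending severity order, highest-severity first."""
    # Python's heapq is a min-heap, so negate severity to pop the largest first.
    queue = [(-severity, name) for name, severity in alerts]
    heapq.heapify(queue)
    while queue:
        neg_severity, name = heapq.heappop(queue)
        yield name, -neg_severity

# Hypothetical alerts with model-assigned severity scores.
incoming = [
    ("phishing email reported", 4),
    ("ransomware beacon detected", 9),
    ("outdated plugin", 2),
]
for name, severity in triage(incoming):
    print(f"{severity}: {name}")
```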

The rise of AI is understandably a cause of concern for educational institutions and teaching staff alike. Improved cyber threat capabilities mean that schools and universities need to be prepared for changing attacks. However, it’s clear that adopting AI systems is the best way for educational institutions to improve their own cybersecurity. By combining adept cybersecurity staff with artificial intelligence cybersecurity systems, educational institutions can stay ahead of new threats and improve the efficiency of their operations.

5 ways artificial intelligence is powering the creation of next-generation data centres

Stuart O'Brien

Data has become the lifeblood of organisations all over the world. The rapid advancement of artificial intelligence (AI) as a tool for data manipulation and management has transformed how we live, work, and interact with technology. 

Just about every industry is seeing exciting applications of artificial intelligence technology in the real-time analysis of large data sets to inform operational decisions. Businesses in the resources sector, retail, agriculture, healthcare, and financial services are rushing to take advantage of enormous business opportunities, but they’ll need specialist AI data centres to accomplish that.

And, with AI increasingly integrating into data centres, demand for data analytics and management has reached unprecedented levels. Steve Hollingsworth, Director at Covenco with over 30 years’ experience in data centre design, explores five ways that AI is revolutionising the creation of new data centres and shaping the future of AI-powered solutions in 2023 and beyond…

Enhancing Efficiency and Scalability

The exponential growth of data generated by AI-enabled applications calls for more efficient and scalable data centres. AI is now employed to help analyse historical and real-time data patterns to optimise energy consumption, cooling mechanisms, and server allocation within data centres. By leveraging AI-enabled tools, project managers in charge of data centre design and development can achieve enhanced power efficiency, reduce operating costs, and improve the overall sustainability of their infrastructure. Additionally, AI-driven predictive analytics can enable proactive maintenance, optimised hardware deployment, and reduced system failures, leading to greater uptime and availability.

Streamlining Data Management and Analysis

The rapid adoption of AI has created a significant need for robust data management and analysis. Data centres powered by AI are increasingly adopting machine learning tools to streamline data ingestion, classification, and storage processes. AI can also help to optimise data classification, making it easier for organisations to leverage their digital assets far more effectively. Furthermore, AI-enabled tools are already performing complex analytics tasks like data clustering and anomaly detection, enabling data managers to identify patterns, spot security threats and extract actionable insights much more efficiently.

Intelligent Resource Allocation

The dynamic nature of AI workloads demands intelligent resource allocation within data centres. By leveraging AI-driven solutions, data managers can optimise workload distribution across different servers, GPUs, and storage units, ensuring efficient utilisation of resources. AI tools analyse real-time workload patterns and make intelligent decisions on load balancing, resource provisioning, and task prioritisation. This enables data managers to achieve higher performance, reduced latency, and improved scalability, which is crucial for supporting AI-driven applications with demanding computational requirements.
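The load-balancing decision described here can be sketched as a greedy allocator that places each task, largest first, on the currently least-loaded server. The task names, costs, and server names are hypothetical, and real schedulers weigh far more signals than a single cost number.

```python
def assign(tasks, servers):
    """Greedy load balancing: place each task on the currently least-loaded server."""
    load = {name: 0 for name in servers}
    placement = {}
    # Largest tasks first tends to balance better under a greedy policy.
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)
        placement[task] = target
        load[target] += cost
    return placement, load

# Hypothetical workloads (arbitrary cost units) and GPU servers.
tasks = {"training-job": 8, "inference": 3, "etl": 5, "backup": 2}
placement, load = assign(tasks, ["gpu-1", "gpu-2"])
print(placement)
print(load)
```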

Security and Privacy Measures

As the reliance on AI-driven analytics grows, ensuring robust security and privacy measures becomes paramount. AI-enabled systems can be pivotal in detecting and mitigating potential security threats within data centres. Using machine learning tools, data centres can identify patterns indicative of cyberattacks, false encryption, anomalous behaviour, or unauthorised access attempts – enabling proactive threat detection and prevention. Moreover, these same AI tools can enhance data privacy by anonymising sensitive information and implementing advanced encryption techniques based on blockchain technologies, safeguarding the confidentiality of valuable data.

Predictive Analytics for Infrastructure Planning

Data managers increasingly need to anticipate and accommodate future demands on the data they manage. AI tools can help analyse historical data usage patterns and trends to provide predictive insights for capacity planning and infrastructure design. By leveraging AI, data managers can also make informed decisions about server deployment, storage allocation, and network bandwidth provisioning. This predictive approach ensures that servers, storage and networking remain capable of meeting the evolving demands of AI applications.
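The capacity-planning idea can be sketched as a least-squares trend fitted to historical usage and extrapolated forward. The monthly storage figures are invented for illustration; real forecasting would account for seasonality and uncertainty.

```python
def linear_trend(values):
    """Fit a least-squares line through (index, value) points; return (slope, intercept)."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

def forecast(values, months_ahead):
    """Extrapolate the fitted trend `months_ahead` beyond the last observation."""
    slope, intercept = linear_trend(values)
    return slope * (len(values) - 1 + months_ahead) + intercept

# Hypothetical monthly storage usage in TB, growing roughly linearly.
usage_tb = [40, 44, 47, 52, 55, 60]
print(round(forecast(usage_tb, 6), 1))  # projected usage six months out
```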

Tomorrow’s digital landscape

The explosive growth of AI-driven applications has given rise to a surge in data demands, requiring data centre managers to adapt and innovate. By harnessing the power of AI at the infrastructure and design level, data owners can significantly enhance efficiency, streamline data management, optimise resource allocation, fortify security, and plan for future growth. As AI continues to permeate every aspect of our lives, the collaboration between AI and data centres will play a pivotal role in driving innovation, powering advanced analytics, and shaping tomorrow’s digital landscape. The synergy between AI and data centres holds immense potential to unlock new opportunities and transform industries in 2023 and beyond.
