security Archives - Cyber Secure Forum | Forum Events Ltd
Posts Tagged: security

The risk of IT business as usual 

By Stuart O'Brien

IT teams within mid-sized organisations are over-stretched. Resources are scarce, with sometimes skeleton teams responsible for all aspects of IT delivery across large numbers of users. With up to 90% of the team’s time being spent ‘keeping the lights on’, there is minimal scope for the strategic thinking and infrastructure optimisation that business leaders increasingly demand. Yet without IT, businesses cannot function. And in many cases, there will be compliance or regulatory consequences in the event of a data breach.

With cyber security threats rising daily, businesses cannot afford to focus only on Business as Usual (BAU). But without the in-house expertise in security, backup and recovery, or the time to keep existing skills and knowledge at the cutting edge, IT teams are in a high-risk catch-22.

Steve Hollingsworth, Director, and Gurdip Sohal, Sales Director, at Covenco explain why a trusted IT partner that adds dedicated expertise in key areas such as infrastructure, backup and security to the existing IT team is now a vital component of supporting and safeguarding the business…

Unattainable Objectives

Prioritising IT activity and investment is incredibly challenging. While IT teams are being pulled from pillar to post simply to maintain essential services, there is an urgent need to make critical upgrades to both infrastructure and strategy. The challenges are those IT teams will recognise well: cyber security threats continue to increase, creating new risks that cannot be ignored. Business goals – and the reliance on IT – are evolving, demanding more resilience, higher availability and a robust data recovery strategy. Plus, of course, any changes must be achieved with sustainability in mind: a recent Gartner survey revealed that 87% of business leaders expect to increase their investment in sustainability over the next two years to support organisation-wide Environmental, Social and Governance (ESG) goals.

But how can IT Operations meet these essential goals while also responding to network glitches, managing databases and, of course, dealing with the additional demands created by Working from Home (WFH)? Especially when skills and resources are so thin on the ground. While there are some indications that the continued shortage of IT staff may abate by the end of 2023, that doesn’t help any business today.

Right now, there is simply no time to upskill or reskill existing staff. Indeed, many companies are struggling to keep hold of valuable individuals who are being tempted elsewhere by ever rising salaries. Yet the business risk created by understaffed and overstretched IT teams is very significant: the most recent fines imposed by the Information Commissioner's Office (ICO), for example, warn companies against complacency and against failing to take essential steps such as upgrading software and training staff.

Differing Demands

With four out of five CEOs increasing digital technology investments to counter current economic pressures, including inflation, scarce talent and supply constraints, according to Gartner, something has to give if resources remain so stretched. And most IT people will point immediately to the risk of a cyber security breach. Few companies now expect to avoid a data breach. According to IBM's 2022 Cost of a Data Breach report, for 83% of companies it's not if a data breach will happen, but when. And they expect a breach to occur more than once.

The research confirms that faster is always better when detecting, responding to and recovering from threats: the quicker the resolution, the lower the business cost. But how many IT teams have the resources on tap to feel confident in the latest security postures or to create relevant data backup and recovery strategies?

These issues place different demands on IT teams. Most organisations need 24/7 monitoring against the threat of a cyber-attack; establishing and then maintaining data backup and recovery policies, in contrast, is not a full-time requirement, with most companies needing only an annual or bi-annual review and upgrade. This is why a trusted partner able to deliver an end-to-end service covering infrastructure, backup, managed services and security – one that can flex up and down as the business needs it – is now becoming a core resource within the IT Operations team.

Extended Expertise Resource

A partner with dedicated technical expertise can augment existing skills in such specialist areas. These are individuals who spend every day assessing the latest technologies and solutions, who understand business needs and know how to achieve a best practice deployment quickly and, crucially, right first time.

Taking the time to understand the entire IT environment and assessing the backup and recovery needs, for example, is something that an expert can confidently and quickly achieve without the Business-as-Usual distractions a member of the IT team faces. What is the company’s Recovery Point Objective (RPO) or Recovery Time Objective (RTO)? How long will it take to get back up and running in the event of an attack or server failure? What are the priority systems? How is the business going to deal with a cyber-attack?
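To make those questions concrete, the check below is a minimal sketch (hypothetical thresholds and function names, not Covenco's methodology) of how agreed RPO and RTO targets can be tested against the actual state of a system's backups:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical recovery objectives agreed with the business.
RPO = timedelta(hours=4)   # maximum tolerable data loss
RTO = timedelta(hours=2)   # maximum tolerable downtime

def check_recovery_objectives(last_backup: datetime,
                              estimated_restore: timedelta) -> list[str]:
    """Flag a system whose backup age or restore time breaches RPO/RTO."""
    findings = []
    data_at_risk = datetime.now(timezone.utc) - last_backup
    if data_at_risk > RPO:
        findings.append(f"RPO breach: {data_at_risk} since last backup (limit {RPO})")
    if estimated_restore > RTO:
        findings.append(f"RTO breach: restore takes {estimated_restore} (limit {RTO})")
    return findings

# Example: a priority system backed up nightly with a three-hour restore.
print(check_recovery_objectives(
    last_backup=datetime.now(timezone.utc) - timedelta(hours=9),
    estimated_restore=timedelta(hours=3),
))
```

Run against every priority system, a check like this turns the RPO/RTO conversation into a concrete list of gaps to close.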

By focusing exclusively on where risks may lie and then implementing the right solutions quickly and effectively, a partner can de-risk the operation. From a Veeam backup vault in the cloud to instant database copies using IBM FlashSystem, or a disaster recovery plan that includes relocation or high availability with the goal of achieving local recovery within minutes, the entire process can be delivered while allowing the IT team to concentrate on their existing, demanding roles.

Conclusion

Whether a company needs to expand its infrastructure to support the CEO’s digital agenda or radically improve cyber security, or both, very few IT teams have either the spare capacity or dedicated expertise to deliver. Focusing on Business as Usual is, of course, an imperative – but unfortunately just not enough in a constantly changing technology landscape.

Partnering with a trusted provider that can deliver a flexible end-to-end service, with dedicated skills as and when required to supplement and support the overstretched IT team, is therefore key to not only keeping the lights on, but also ensuring the business's current and future needs are effectively addressed.

INDUSTRY SPOTLIGHT: Protect your top attack vectors across all channels, by Perception Point

Guest Post

Perception Point is a Prevention-as-a-Service company for the fastest and most accurate next-generation detection, investigation, and remediation of all threats across an organisation’s main attack vectors – email, web browsers, and cloud collaboration apps.

Perception Point streamlines the security environment for unmatched protection against spam, phishing, BEC, ATO, ransomware, malware, Zero-days, and N-days well before they reach end-users.

The use of multiple layers of next-gen static and dynamic engines along with patented technology protects organizations against malicious files, URLs, and social engineering-based techniques. All content is scanned in near real-time, ensuring no delays in receipt, regardless of scale and traffic volume. Cloud-based architecture shortens development and deployment cycles as new cyber attacks emerge, keeping you steps ahead of attackers.

The solution’s natively integrated, free of charge, and fully managed incident response service acts as a force multiplier to the SOC team, reducing management overhead, improving user experience and delivering continuous insights. By eliminating false negatives and reducing false positives to a bare minimum, the solution provides proven best protection for all organizations.

Perception Point empowers security professionals to control their full security stack with one solution, viewed from an intuitive, unified dashboard. Users can add any channel, including cloud storage, CRM, instant messaging, and web apps, in just one click to provide threat detection coverage across the entire organization.

Deployed in minutes, with no change to the enterprise’s infrastructure, the patented, cloud-native and easy-to-use service replaces cumbersome legacy systems.

Fortune 500 enterprises and organizations across the globe are preventing attacks across their email, web browsers and cloud collaboration channels with Perception Point.

Contact us to learn more about how Perception Point can secure your business. 

Connect with us on LinkedIn, Twitter, and Facebook.

What more, if anything, should governments be doing about cyber actors?


By Will Dixon, Global Head of the Academy and Community at ISTARI

Cyberattacks are becoming more frequent, and their potential consequences are becoming more severe. With Critical National Infrastructure and other important services constantly in the virtual crosshairs of both state actors and cybercriminals, it is entirely conceivable that an attack, or a series of attacks, will lead to significant public harm.

In the event that this happens, governments and law enforcement will find themselves facing calls to act. In the eyes of the public, we might assume that doing so would seem natural; after all, offensive cyber operations are not as risky as military operations in the real world, so why not do more to disrupt these groups?

The picture is, of course, not that simple. The negotiations currently taking place at the United Nations on a treaty on cybercrime demonstrate the complexity of reaching international agreement on what constitutes a cybercrime. The penalties that should be enacted against the perpetrators, and the powers global law enforcement agencies should have in order to prosecute those perpetrators, are also up for debate.

That definition is fiercely contested, given the significant implications for countries such as Russia and China that want the definition to include terms allowing them to impose strict censorship laws and pursue dissidents. While this debate continues, the lack of agreed rules of the road is holding back action against cyber criminals.

Nonetheless, the relentlessness of cybercrime means that it is worth considering how governments and law enforcement should deal with cyber criminals. We have seen how knee-jerk reactions to major events have led to poor outcomes in the past. The cyber community should endeavour to avoid making the same mistakes.

Change in Policy

There needs to be more cooperation between national and supranational agencies, which includes better access to global data sources. This would require deep, scalable operations and partnerships with law enforcement agencies on an international scale. Some of these partnerships will likely involve countries that would rather not collaborate.

It will also require better collaboration between victim organisations and law enforcement, as the recent takedown of Hive, a ransomware group that targeted more than 1,500 victims in over 80 countries around the world, has shown. Close cooperation between victims and forensics investigators at the FBI ultimately allowed law enforcement to map and disrupt the entire Hive network. If law enforcement agencies want to do this on a wider scale, they must open their doors to victims and make sure that these victims are not afraid of further penalties for being more open about the events that resulted in an attack.

Implementing Positive Incentive Models

It is an unfortunate reality that there are not nearly enough cybersecurity companies or organisations that possess the bespoke capabilities, human resources, and training to safely secure the convergence of enterprise software, the Internet of Things (IoT), and Operational Technology (OT) environments associated with Critical National Infrastructure. Preventing harm to the public requires that we fix this.

While there are many negative incentive models, such as regulation and fines for non-compliance, this can only take us so far. More positive incentive models are needed, whereby the government works alongside the community to provide resources and the financial support required to create a strong ecosystem of organisations that can navigate the complexity of critical national infrastructure environments. There has been some evidence of this in the USA, such as the federal government’s investment in cybersecurity controls following the Colonial Pipeline attack. However, more meaningful public-private cooperation is needed in order to create the ecosystem of advanced capabilities we need.

Moving Forward

There is no escaping the fact that the cyber-threat level is growing, and it appears that we are on an unavoidable path towards law enforcement campaigns acting against cyber criminals. Whilst an appetite for more muscular action against cybercriminals is entirely understandable, we must also accept that it is not guaranteed to make a positive difference; campaigns against international criminal networks of other kinds have proved ineffective before. If we want to keep digital systems and the public they serve safe from harm, we need to invest more time and effort in creating the capabilities to do so.

OPINION: Don’t let fatigue be the cause of MFA bypass


By Steven Hope, Product Director MFA at Intercede

If names such as Conficker, Sasser and MyDoom send a shiver down your spine, you are not alone. In the not-too-distant past, computer viruses, whether simple or sophisticated, had the power to cripple organisations large and small, as cybercriminals sought to wreak havoc and gain notoriety and wealth.

For security professionals, endpoint/perimeter protection was the name of the game, with firewalls and anti-virus software providing the first line of defence. Whilst this type of malware still exists, it is no longer the main attack vector. The threat landscape is ever evolving, with the growth of man-in-the-middle (session hijacking), SIM hacking and targeted phishing attacks preying on vulnerable authentication, including Multi-Factor Authentication (MFA).

In the same way that anti-virus has never been able to protect systems from 100% of trojans, worms, botnets, ransomware and the like, there is no such thing as a phishing-proof solution, bar hardware-based PKI and FIDO for now. However, there are ways to be more resistant to phishing attacks. Unfortunately, the weakest form of resistance is also the most commonplace: passwords. Guess, buy or socially engineer a password and you instantly have access to whatever it is ‘protecting’, be it a social media account or a mission-critical system. If it was deemed important enough to have a password in front of it, then the chances are that it has a degree of value, financial or otherwise, that can be exploited.

The obvious choice, therefore, is to add another layer of security, so if the password is breached then there is another obstacle to overcome. This is commonly known as multi-factor authentication (MFA), but this can be a misnomer, if, for example, one of those factors is a poorly managed password programme (not following NIST guidelines and failing to have a Password Security Management solution). Given the weakness of passwords, MFA of this type is typically only as secure as the second factor. So, whilst potentially more secure than a standalone password, it is far from being resistant to phishing and some might argue whether this really is MFA.

Brute force attacks to guess passwords are still used today, but many cybercriminals are far more likely to focus less on cracking the computer and more on engineering the employee, through techniques such as spear phishing, BEC (Business Email Compromise) and consent phishing. The aim here is to encourage the identified target to unwittingly hand over the information they need.

A perfect example of this is the exploitation of the complacency surrounding push notifications (commonly known as ‘push fatigue’). Push notifications are increasingly used as the second factor when logging on to a system, or making a purchase. A message asks the account owner to accept, enter a one-time-code (OTC), or use a biometric (via the fingerprint reader on a mobile device).

Cybercriminals have learnt that bombarding accountholders with push notifications, creating fatigue, can then result in the owner complying with their request; after all, if pressing ‘Decline’ a few times doesn’t make the popups stop, maybe pressing ‘Accept’ will. If they already have the username and password (readily available and traded at very low cost on the dark web) they can do as they please, whether that be making a transaction, emptying an account, or downloading or deleting data. If the term ‘trojan horse’ had not already been attributed in the world of cybersecurity, it would be an apt description of what cybercriminals are doing with push notifications.
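The defence is straightforward in principle: stop sending prompts once a user has declined several in a short window, and fall back to a stronger factor. The sketch below illustrates that general idea (hypothetical thresholds and function names; it is not any vendor's actual implementation):

```python
from collections import defaultdict
from time import time

DECLINE_LIMIT = 3        # declines before push is suspended
SUSPEND_SECONDS = 3600   # cool-off period with push disabled

_declines: dict[str, list[float]] = defaultdict(list)
_suspended_until: dict[str, float] = {}

def record_decline(user: str) -> None:
    """Count a declined push; suspend push MFA for the user after repeated declines."""
    now = time()
    _declines[user] = [t for t in _declines[user] if now - t < SUSPEND_SECONDS]
    _declines[user].append(now)
    if len(_declines[user]) >= DECLINE_LIMIT:
        # The attacker can no longer generate prompts to fatigue the user.
        _suspended_until[user] = now + SUSPEND_SECONDS

def may_send_push(user: str) -> bool:
    """Gate every push; while suspended, require a phishing-resistant factor instead."""
    return time() >= _suspended_until.get(user, 0.0)
```

While push is suspended, the login flow should demand a stronger factor rather than silently failing, so a legitimate user is never locked out while the attacker loses the ability to spam prompts.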

So, if poorly managed passwords are weak and 2FA easily bypassed, it is a valid question to ask where that leaves authentication, especially given the lack of recognised standards (although I would encourage anyone to look at FIPS 201, published by NIST). The reality is that a multi-faceted and multi-factor authentication (MFA) approach needs to be phishing resistant. The better staff are trained (CUJO AI reported in January that 56% of Internet users try to open at least one phishing link every month), and the more factors there are, the more secure you are. How far you go on the scale from passwords (not phishing resistant) to PKI (the highest level of authentication assurance) will very much depend on where you sit in the food chain and whether the organisation could be perceived to be a high-value target, whether of itself or for its role in a wider and richer supply chain.

The reality for most organisations of any size is that different people and tasks will require different assurance levels, so any MFA solution used needs to have the ability to scale how credentials are applied appropriately. Authlogics Push MFA has been built with the end user in mind, giving them useful information with which to make a more informed accept/decline decision. Furthermore, after declining a logon they can simply tap the reason why and push fatigue protection will automatically kick in.

In the third quarter of 2022, the Anti-Phishing Working Group (APWG) reported 1,270,883 phishing attacks, the worst ever recorded by the group. The reason is simple – phishing works. Every expectation is that 2023 will continue to see numbers rise. However, using the right MFA as part of an overall security strategy can provide the resistance needed to repel ever more sophisticated, persistent and persuasive attacks.

Should I switch penetration testing provider every year? A pentester’s perspective…


By Greg Charman – Pentester at iSTORM Solutions

It’s that time again. Time to reach out to several pentest providers and get the ball rolling with scoping calls, quoting, then re-quoting. Once this is completed and you’ve chosen this year’s provider, you hope that they have availability that aligns with your timeframes.

All this in the interest of having a “fresh pair of eyes” look at your systems. Wouldn’t it be easier if you were able to build a relationship with the provider you will be trusting your most valuable information with?

As a pentester myself, I find that the process of planning an engagement is much more efficient for everyone involved when we already have a relationship with the client. As a consultant, my job is not only to scope, complete and report the test but to make sure that we are making the best use of your budget and our time during the process. This is much easier if I already have an understanding of your business. An insight into your organisation’s infrastructure is essential when trying to prioritise risks and enables me to identify the best techniques to accommodate those priorities. Ultimately, a pentest works best when it’s a collaborative effort between both organisations.

Another benefit of partnering with a pentest provider is avoiding the headache of tracking vulnerabilities year on year. Remediation advice is great, but keeping metrics around your organisation’s evolving security posture can be difficult if you have data from several different sources. Why not make it easier by using a provider who can give you a consolidated view of this?

Repeat partnering with a pentest provider may also result in loyalty discounts when it comes to pricing – helping your organisation utilise its budget better!

For more info on how iSTORM can provide a tailored solution for your privacy, security and pentesting needs visit: https://istormsolutions.co.uk/

Protecting data irrespective of infrastructure 

Guest Post

The cyber security threat has risen so high in recent years that most companies globally now accept that a data breach is almost inevitable. But what does this mean for the data protection and compliance officers, as well as senior managers, now personally liable for protecting sensitive company, customer and partner data?

Investing in security infrastructure is not enough to demonstrate compliance in protecting data. Software Defined Wide Area Networks (SD WAN), firewalls and Virtual Private Networks (VPN) play a role within an overall security posture, but they are infrastructure solutions and do not safeguard data. What happens when the data crosses outside the network to the cloud or a third-party network? How is the business data on the LAN side protected if an SD WAN vulnerability or misconfiguration is exploited? What additional vulnerability is created by relying on the same network security team to both set policies and manage the environment, in direct conflict with Zero Trust guidance?

The only way to ensure the business is protected and compliant is to abstract data protection from the underlying infrastructure. Simon Pamplin, CTO, Certes Networks, insists it is now essential to shift the focus, stop relying on infrastructure security and use Layer 4 encryption to proactively protect business sensitive data irrespective of location…

Acknowledging Escalating Risk

Attitudes to data security need to change fast because today’s infrastructure-led model is creating too much risk. According to IBM’s 2022 Cost of a Data Breach report, 83% of companies confirm they expect a security breach – and many accept that breaches will occur more than once. Given this perception, the question has to be asked: why are businesses still reliant on a security posture focused on locking the infrastructure down?

Clearly that doesn’t work. While not every company will experience the catastrophic impact of the four-year-long data breach that ultimately affected 300 million guests of Marriott Hotels, attackers are routinely spending months inside businesses looking for data. In 2022, it took an average of 277 days—about nine months—to identify and contain a breach. Throughout this time, bad actors have access to corporate data; they have the time to explore and identify the most valuable information. And the chance to copy and/or delete that data – depending on the attack’s objective.

The costs are huge: the average cost of a data breach in the US is now $9.44 million (the global average is $4.35 million). From regulatory fines – which are increasingly punitive across the globe – to the impact on share value, customer trust, even business partnerships, the long-term implications of a data breach are potentially devastating.

Misplaced Trust in Infrastructure

Yet these affected companies have ostensibly robust security postures. They have highly experienced security teams and an extensive investment in infrastructure. But they have bought into the security industry’s long perpetuated myth that locking down infrastructure, using VPNs, SD WANs and firewalls, will protect a business’ data.

As breach after breach has confirmed, relying on infrastructure security fails to provide the level of control needed to safeguard data from bad actors. For the vast majority of businesses, data is rarely restricted to the corporate network environment. It is in the cloud, on a user’s laptop, on a supplier’s network. Those perimeters cannot be controlled, especially for any business that is part of supply chains and third-party networks. How does Vendor A protect third-party Supplier B when the business has no control over their network? Using traditional, infrastructure-dependent security, it can’t.

Furthermore, while an SD WAN is a more secure way of sending data across the Internet, it only provides control from the network egress point to the end destination. It provides no control over what happens on an organisation’s LAN side. It cannot prohibit data being forwarded on to another location or person. Plus, of course, it is accepted that SD WAN misconfiguration can add a risk of breach, which means the data is exposed – as shown by the public CVEs (Common Vulnerabilities and Exposures) available to review on most SD WAN vendors’ websites. And while SD WANs, VPNs and firewalls use IPsec as an encryption protocol, their approach to encryption is flawed: the encryption keys and management are handled by the same group, in direct contravention of the accepted zero trust standard of “Separation of Duties”.

Protect the Data

It is, therefore, essential to take another approach, to focus on protecting the data. By wrapping security around the data, a business can safeguard this vital asset irrespective of infrastructure. Adopting Layer 4, policy-based encryption ensures the data payload is protected for its entire journey – whether it was generated within the business or by a third party.

If it crosses a misconfigured SD WAN, the data is still safeguarded: it is encrypted, making it valueless to any hacker. However long an attack may continue, however long an individual or group can be camped out in the business looking for data to use in a ransomware attack, if the sensitive data is encrypted, there is nothing to work with.

Because only the payload data is encrypted, while header data remains in the clear, there is minimal disruption to network services or applications, and troubleshooting an encrypted network becomes easier.
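To illustrate the principle, the sketch below (using the third-party cryptography package, with an invented header format; this is not Certes Networks' product) encrypts only the payload with AES-GCM and binds the clear-text header as authenticated data, so the header stays readable for routing and troubleshooting but cannot be tampered with:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)

def protect_segment(header: bytes, payload: bytes) -> bytes:
    """Encrypt only the payload; authenticate the clear-text header alongside it."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, header)
    # The header stays in the clear so network services keep working.
    return header + nonce + ciphertext

def recover_segment(segment: bytes, header_len: int) -> bytes:
    """Decrypt the payload; fails if header or payload were modified in transit."""
    header = segment[:header_len]
    nonce = segment[header_len:header_len + 12]
    ciphertext = segment[header_len + 12:]
    return AESGCM(key).decrypt(nonce, ciphertext, header)

hdr = b"SRC:10.0.0.1>DST:10.0.0.2"             # invented clear-text 'header'
packet = protect_segment(hdr, b"card=4111...")  # sensitive payload, now ciphertext
print(recover_segment(packet, header_len=len(hdr)))
```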

This mindset shift protects not only the data and, by default, the business, but also the senior management team responsible – indeed personally liable – for security and information protection compliance. Rather than placing the burden of data protection onto network security teams, this approach realises the true goal of zero trust: separating policy setting responsibility from system administration. The security posture is defined from a business standpoint, rather than a network security and infrastructure position – and that is an essential and long overdue mindset change.

Conclusion

This mindset change is becoming critical – from both a business and regulatory perspective. Over the past few years, regulators globally have increased their focus on data protection. From punitive fines – including the European Union’s General Data Protection Regulation (GDPR) maximum of €20 million or 4% of global annual turnover, whichever is the higher, per breach – to the risk of imprisonment, the rise in regulation across China and the Middle East reinforces the clear global recognition that data loss has a material cost to businesses.

Until recently, however, regulators have not been prescriptive about the way in which that data is secured – an approach that has allowed the ‘lock down infrastructure’ security model to continue. This attitude is changing.  In North America, new laws demand encryption between Utilities’ Command and Control centres to safeguard national infrastructure. This approach is set to expand as regulators and businesses recognise that the only way to safeguard data crossing increasingly dispersed infrastructures, from SD WAN to the cloud, is to encrypt it – and do so in a way that doesn’t impede the ability of the business to function.

It is now essential that companies recognise the limitations of relying on SD WANs, VPNs and firewalls. Abstracting data protection from the underlying infrastructure is the only way to ensure the business is protected and compliant.

The secrets of no drama data migration

Guest Post

With Mergers, Acquisitions and Divestments at record levels, the speed and effectiveness of data migration has come under the spotlight. Every step of this data migration process raises concerns, especially in spin-off or divestment deals where just one part of the business is moving ownership. 

What happens if confidential information is accessed by the wrong people? If supplier requests cannot be processed? If individuals with the newly acquired operation have limited access to vital information and therefore do not feel part of the core buyer’s business? The implications are widespread – from safeguarding Intellectual Property, to staff morale, operational efficiency, even potential breach of financial regulation for listed companies.

With traditional models for data migration recognised as high risk, time consuming and potentially capable of derailing the deal, Don Valentine, Commercial Director at Absoft, explains the need for a different approach – one that not only de-risks the process but adds value by reducing the time to migrate and delivering fast access to high quality, transaction-level data…

Record Breaking

2021 shattered Merger & Acquisition (M&A) records – with M&A volume hitting over $5.8 trillion globally. In addition to whole company acquisitions, 2021 witnessed announcements of numerous high-profile deals, from divestments to spin-offs and separations. But M&A performance history is far from consistent. While successful mergers realise synergies, create cost savings and boost revenues, far too many are derailed by cultural clashes, a lack of understanding and, crucially, an inability to rapidly combine the data, systems and processes of the merged operations.

The costs can be very significant, yet many companies still fail to undertake the data due diligence required to safeguard the M&A objective. Finding, storing and migrating valuable data is key, before, during, and post M&A activity. Individuals need access to data during the due diligence process; they need to migrate data to the core business to minimise IT costs while also ensuring the acquired operation continues to operate seamlessly.  And the seller needs to be 100% confident that only data pertinent to the deal is ever visible to the acquiring organisation.

Far too often, however, the data migration process adds costs, compromises data confidentiality and places significant demands on both IT and business across both organisations.

Data Objectives

Both buyer and seller have some common data migration goals. No one wants a long-drawn-out project that consumes valuable resources. Everyone wants to conclude the deal in the prescribed time. Indeed, completion of the IT integration will be part of the Sales & Purchase Agreement (SPA) and delays could have market facing implications. Companies are justifiably wary of IT-related disruption, especially any downtime to essential systems that could compromise asset safety, production or efficiency; and those in the business do not want to be dragged away from core operations to become embroiled in data quality checking exercises.

At the same time, however, there are differences in data needs that can create conflict. While the seller wants to get the deal done and move on to the next line in the corporate agenda, the process is not that simple. How can the buyer achieve the essential due diligence while meeting the seller’s need to safeguard non-deal related data, such as HR, financial history and sensitive commercial information? A seller’s CIO will not want the buying company’s IT staff in its network, despite acknowledging the buyer needs to test the solution. Nor will there be any willingness to move the seller’s IT staff from core strategic activity to manage this process.

For the buyer it is vital to get access to systems. It is essential to capture vital historic data, from stock movement to asset maintenance history. The CIO needs early access to the new system, to provide confidence in the ability to operate effectively after the transition – any concerns regarding data quality or system obsolescence need to be flagged and addressed early in the process. The buyer is also wary of side-lining key operations people by asking them to undertake testing, training and data assurance.

While both organisations share a common overarching goal, the underlying differences in attitudes, needs and expectations can create serious friction and potentially derail the data assurance process, extend the SPA, even compromise the deal.

Risky Migration

To date, processes for finding, storing and managing data before, during and after M&A activity have focused on the needs of the selling company. The seller provided an extract of the SAP system holding the data relevant to the agreed assets and shared that with the buyer. The buyer then had to create configuration and software to receive the data, transform it, and then carry out application data migration to provide operational support for key functions such as supplier management.

This approach is fraught with risk. Not only is the buyer left blind to data issues until far too late but the entire process is time consuming. It also typically includes only master data, not the transactional history required, due to the serious challenges and complexity associated with mimicking the chronology of transactional data loading. Data loss, errors and mis-mapping are commonplace – yet only discovered far too late in the process, generally after the M&A has been completed, leaving the buyer’s IT team to wrestle with inaccuracy and system obsolescence.

More recently, different approaches have been embraced, including ‘behind the firewall’ and ‘copy/raze’. The former has addressed some of the concerns by offering the buyer access to the technical core through a temporarily separated network that houses the in-progress build of the buyer’s systems. While this avoids the need to let the buyer into the seller’s data, shortens the migration process and minimises errors, testing, training and data assurance, it is flawed. It still requires the build of extract and load programs and also uses only master data, for the reasons stated above. It doesn’t address downtime concerns, because testing and data assurance are still required. And it still demands the involvement of IT resources in non-strategic work. Fundamentally, this approach remains a risk to the SPA timeframe – and therefore does not meet the needs of buyer or seller.

The ‘copy/raze’ approach has the benefit of providing transactional data. The seller creates an entire copy and then deletes all data relating to assets not being transferred before transferring to the buyer. However, this model requires an entire portfolio of delete programmes which need to be tested – a process that demands business input. Early visibility of the entire data resources ensures any problems that could affect the SPA can be flagged but the demands on the business are also significant – and resented.

De-risking Migration

A different approach is urgently required. The key is to take the process into an independent location. Under agreement between buyer, seller and data migration expert, the seller provides the entire technical core which is then subjected to a dedicated extract utility. Configuration is based on the agreed key deal assets, ensuring the extraction utility automatically undertakes SAP table downloads of only the data related to these assets – removing any risks associated with inappropriate data access. The process is quicker and delivers better quality assurance. Alternatively, the ‘copy/raze’ approach can be improved by placing the entire SAP system copy into escrow – essentially a demilitarised zone (DMZ) in the cloud – on behalf of both parties.  A delete utility is then used to eradicate any data not related to the deal assets – with the data then verified by the seller before the buyer has any access. Once confirmed, the buyer gains access to test the new SAP system prior to migration.
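As a toy sketch of the ‘copy/raze’ step (hypothetical table and field names; a real SAP extract involves thousands of related tables and a tested portfolio of delete programmes), the escrow copy can be reduced to deal assets only before the seller verifies it:

```python
# Hypothetical deal scope: only these assets transfer to the buyer.
DEAL_ASSETS = {"PLANT-001", "PLANT-007"}

def raze_copy(tables: dict[str, list[dict]]) -> dict[str, list[dict]]:
    """Return a copy of the system with all rows for non-deal assets deleted."""
    return {
        name: [row for row in rows if row.get("asset") in DEAL_ASSETS]
        for name, rows in tables.items()
    }

source = {
    "EQUI": [  # equipment master (invented rows)
        {"asset": "PLANT-001", "equipment": "PUMP-12"},
        {"asset": "PLANT-003", "equipment": "PUMP-99"},  # not in the deal: razed
    ],
}
escrow_copy = raze_copy(source)  # verified by the seller before the buyer gains access
print(escrow_copy)
```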

These models can be used separately and in tandem, providing a data migration solution with no disruption and downtime reduced from weeks to a weekend. The resultant SAP solution can be optimally configured as part of the process, which often results in a reduction in SAP footprint, with the attendant cost benefits.  Critically, because the buyer gains early access to the transaction history, there is no need for extensions for the SPA – while the seller can be totally confident that only the relevant data pertaining to the deal is ever visible to the buyer.

Conclusion

By embracing a different approach to data migration, organisations can not only assure data integrity and minimise the downtime associated with data migration but also reduce the entire timescale. By cutting the data due diligence and migration process from nine months to three, the M&A SPA can be significantly shorter, reducing the costs associated with the transaction while enabling the buyer to confidently embark upon new strategic plans.

80% of software supply chains exposed to attack

By Stuart O'Brien

Four in five (80%) IT decision makers stated that their organisation had received notification of an attack or vulnerability in its software supply chain in the last 12 months, with the operating system and web browser creating the biggest impact.

That’s according to new research from BlackBerry, which shows that following a software supply chain attack, respondents reported significant operational disruption (59%), data loss (58%) and reputational impact (52%), with nine out of ten organisations (90%) taking up to a month to recover.

The results come at a time of increased U.S. regulatory and legislative interest in addressing software supply chain security vulnerabilities.

The survey of 1,500 IT decision makers and cybersecurity leaders across North America, the United Kingdom and Australia revealed the significant challenge of securing software supply chains against cyberattack, even with rigorous use of recommended measures such as data encryption, Identity and Access Management (IAM) and Privileged Access Management (PAM) frameworks.

Despite enforcing these measures across partners, more than three-quarters (77%) of respondents had, in the last 12 months, discovered participants within their software supply chain that they had not previously been aware of and had not been monitoring for adherence to critical security standards.

“While most have confidence that their software supply chain partners have policies in place of at least comparable strength to their own, it is the lack of granular detail that exposes vulnerabilities for cybercriminals to exploit,” said Christine Gadsby, VP, Product Security at BlackBerry. “Unknown components and a lack of visibility on the software supply chain introduce blind spots containing potential vulnerabilities that can wreak havoc across not just one enterprise, but several, through loss of data and intellectual property and operational downtime, along with financial and reputational impact. How companies monitor and manage cybersecurity in their software supply chain has to rely on more than just trust.”

Results also revealed that while, on average, organisations were found to perform a quarterly inventory of their own software environment, they were prevented from more frequent monitoring by factors including a lack of skills (54%) and visibility (44%). In fact, 71% said they would welcome tools to improve inventory of software libraries within their supply chain and provide greater visibility to software impacted by a vulnerability. Similarly, 72% were in favour of greater governmental oversight of open-source software to make it more secure against cyber threats.
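As a hint of what such inventory tooling involves, the sketch below (the function name is ours) uses only Python’s standard library to list installed packages and versions; real supply chain visibility extends this to transitive dependencies, container images and vendor software:

```python
from importlib.metadata import distributions

def inventory() -> list[tuple[str, str]]:
    """List installed package names and versions: raw material for an SBOM."""
    return sorted((dist.metadata["Name"], dist.version) for dist in distributions())

for name, version in inventory():
    print(f"{name}=={version}")  # diff this output between quarterly scans
```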

In the event of a breach, 62% of respondents agree that speed of communications is paramount and 63% would prefer a consolidated event management system for contacting internal security stakeholders and external partners. Yet only 19% have this kind of communications system in place; the remaining 81% operate multiple systems, despite only 28% of respondents saying that they need to tailor communications to different stakeholder groups.

CIOs ‘need to accelerate time to value’ from digital investments

By Stuart O'Brien

CIOs and IT leaders must take action to accelerate time to value and drive top- and bottom-line enterprise growth from digital investments.

That’s according to Gartner’s annual global survey of CIOs and technology executives, which gathered data from 2,203 CIO respondents in 81 countries and all major industries, representing approximately $15 trillion in revenue/public-sector budgets and $322 billion in IT spending.

“The pressure on CIOs to deliver digital dividends is higher than ever,” said Daniel Sanchez Reina, VP Analyst at Gartner. “CEOs and boards anticipated that investments in digital assets, channels and digital business capabilities would accelerate growth beyond what was previously possible. Now, business leadership expects to see these digital-driven improvements reflected in enterprise financials.

“CIOs expect IT budgets to increase 5.1% on average in 2023 – lower than the projected 6.5% global inflation rate. A triple squeeze of economic pressure, scarce and expensive talent and ongoing supply challenges is heightening the desire and urgency to realize time to value.”

The survey analysis revealed four ways in which CIOs can deliver digital dividends and demonstrate the financial impact of technology investments:

Prioritize the Right Digital Initiatives

Survey respondents ranked their executives’ objectives for digital technology investment over the last two years. The top two objectives were to improve operational excellence (53%) and improve customer or citizen experience (45%). In comparison, only 27% cited growing revenue as a primary objective and 22% cited improving cost efficiency.

“CIOs must prioritize digital initiatives with market-facing, growth impact,” said Janelle Hill, Distinguished VP Analyst, Gartner. “For some CIOs, this means stepping out of their comfort zone of internal back-office automation to instead focus on customer or constituent-facing initiatives.”

The survey revealed that CIOs’ future technology plans remain focused on optimization rather than growth. CIOs’ top areas of increased investment for 2023 include cyber and information security (66%), business intelligence/data analytics (55%) and cloud platforms (50%). However, just 32% are increasing investment in artificial intelligence (AI) and 24% in hyperautomation.

“Leading CIOs are more likely to leverage data, analytics and AI to detect emerging consumer behavior or sentiment that might represent a growth opportunity,” added Hill.

Create a Metrics Hierarchy

The survey found that 95% of organizations struggle with developing a vision for digital change, often due to competing expectations from different stakeholders. To drive financial outcomes, CIOs must reconcile siloed initiatives by using a visual metrics hierarchy to communicate and demonstrate interdependencies across related digital initiatives.

“A key ingredient needed to accelerate delivery of digital benefits is accountability,” said Hill. “For example, if the enterprise undertakes a digital initiative to improve customer experience, with the financial goal of improving profit margins, then the CIO’s accountable partner is likely the CMO.”

CIOs should connect with functional leaders for each digital initiative to understand what ‘improvement’ means and how it can be measured. Creating a picture that reflects the hierarchy of technical and business outcome metrics for each initiative will help identify the chain of accountability that will collectively deliver the dividend in focus.

Contribute IT Talent to a Business-Led Fusion Team

While strategic engagement with business unit leaders is necessary to accelerate digital initiatives, the survey exposed an IT mindset of “go it alone” regarding solution delivery. For example, 77% of CIOs said that IT employees are primarily providing innovation and collaboration tools, compared with 18% who said non-IT personnel are providing these tools.

“Over-dependence on IT staff for digital delivery reflects a traditional mindset, which can impede agility,” said Sanchez Reina. “CIOs must embrace democratized digital delivery by design to accelerate time to value. Equipping and empowering those outside of IT – especially business technologists – to build digitalized capabilities, assets and channels can help achieve business goals faster.”

Loaning IT staff to fusion teams that combine business experts, business technologists and IT staff will catalyze a team that is focused on achieving digital business outcomes, while also opening the way for reciprocity, such as integrating subject-matter experts from the business into an IT-led fusion team.

Reduce the Talent Gap with Unconventional Resources

Many CIOs continue to struggle to hire and retain IT talent to accelerate digital initiatives. However, the survey identified numerous sources of technology talent that are untapped. For example, only 12% of enterprises use students (through internships and relationships with schools) to help develop technological capabilities and only 23% use gig workers.

“Talent shortages are among the greatest hindrances to digital,” said Sanchez Reina. “CIOs are often limited by policies related to preferred providers or employment contracts. They must stress to business and HR leadership that engaging unconventional talent sources can help accelerate the realization of digital dividends.”


Security Information & Event Management (SIEM) spend to exceed $6.4bn by 2027

By Stuart O'Brien

A new study from Juniper Research has found that total business spend on SIEM (Security Information & Event Management) will exceed $6.4 billion globally by 2027, up from just over $4.4 billion in 2022.

It predicts that this growth of 45% will be driven by the transition from term licence (where businesses can use SIEM for specific licence lengths) to more flexible SaaS (Software-as-a-Service) models (where SIEM solutions are purchased via monthly subscription). This will enable small businesses to access previously unaffordable services.

A SIEM system is a combination of SIM (Security Information Management) and SEM (Security Event Management), which results in real-time automated analysis of security alerts generated by applications and network hardware, leading to improved corporate cybersecurity.
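That real-time analysis is typically expressed as correlation rules over a normalised event stream. The toy rule below (hypothetical event schema and thresholds) flags a possible brute-force attack when one account records five login failures within a minute:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # correlation window
THRESHOLD = 5         # failures within the window that trigger an alert
_failures: dict[str, deque] = defaultdict(deque)

def ingest(event: dict) -> str | None:
    """Consume a normalised log event; return an alert if the rule fires."""
    if event["type"] != "login_failure":
        return None
    q = _failures[event["user"]]
    q.append(event["ts"])
    while q and event["ts"] - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop failures that fell outside the window
    if len(q) >= THRESHOLD:
        return f"ALERT: possible brute force against {event['user']}"
    return None

alert = None
for ts in range(0, 50, 10):  # five failures in 50 seconds
    alert = alert or ingest({"type": "login_failure", "user": "admin", "ts": ts})
print(alert)
```

Commercial SIEM platforms ship hundreds of such rules, together with the storage, dashboards and case management around them.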

IBM Tops Juniper Research Competitor Leaderboard

The research identified the world’s leading SIEM providers by evaluating their offerings, and the key factors that have led to their respective success, such as the breadth and depth of their platforms.

The top 3 vendors are:
1.    IBM
2.    Rapid7
3.    Splunk

Research co-author Nick Maynard said: “Juniper Research has ranked IBM as leading in the global SIEM market, based on its highly successful analytics platform and its ease of integration. SIEM vendors aiming to compete must design scalable solutions that are accessible to smaller businesses, which can provide easy-to-understand, actionable insights for less experienced cybersecurity teams.”

Transition to SaaS Accelerating Rapidly

Additionally, the research found that SaaS business models within SIEM are gaining traction; accounting for almost 73% of global business spend on SIEM in 2027, from only 37% in 2022. This significant increase represents an opportunity for newer vendors to break into the market with appealing SaaS-based models, but SIEM vendors must be careful not to leave larger enterprises, which still prefer term licences, behind.

To find out more, see the new report: Security Information & Event Management: Key Trends, Competitor Leaderboard & Market Forecasts 2022-2027.