Guest Post, Author at Cyber Secure Forum | Forum Events Ltd - Page 3 of 14

Should I switch penetration testing provider every year? A pentester’s perspective…

By Greg Charman – Pentester at iSTORM Solutions

It’s that time again. Time to reach out to several pentest providers and get the ball rolling with scoping calls, quoting and re-quoting. Once that’s done and you’ve chosen this year’s provider, you can only hope they have availability that aligns with your timeframes.

All this in the interest of having a “fresh pair of eyes” have a look at your systems. Wouldn’t it be easier if you were able to build a relationship with the provider you will be trusting your most valuable information with?

As a pentester myself, I find that the process of planning an engagement is much more efficient for everyone involved when we already have a relationship with the client. As a consultant, my job is not only to scope, complete and report the test but to make sure that we are making the best use of your budget and our time during the process. This is much easier if I already have an understanding of your business. An insight into your organisation’s infrastructure is essential when trying to prioritise risks and enables me to identify the best techniques to accommodate those priorities. Ultimately, a pentest works best when it’s a collaborative effort between both organisations.

Another benefit of partnering with a pentest provider is avoiding the headache of tracking vulnerabilities year on year. Remediation advice is great, but keeping metrics on your organisation’s evolving security posture is difficult when the data comes from several different sources. Why not make it easier by using a provider who can give you a consolidated view?

Repeat partnering with a pentest provider may also result in loyalty discounts when it comes to pricing – helping your organisation utilise its budget better!

For more info on how iSTORM can provide a tailored solution for your privacy, security and pentesting needs visit: https://istormsolutions.co.uk/

Protecting data irrespective of infrastructure 

The cyber security threat has risen so high in recent years that most companies globally now accept that a data breach is almost inevitable. But what does this mean for the data protection and compliance officers, as well as senior managers, now personally liable for protecting sensitive company, customer and partner data?

Investing in security infrastructure is not enough to demonstrate compliance in protecting data. Software Defined Wide Area Networks (SD WAN), Firewalls and Virtual Private Networks (VPN) play a role within an overall security posture but they are Infrastructure solutions and do not safeguard data. What happens when the data crosses outside the network to the cloud or a third-party network? How is the business data on the LAN side protected if an SD WAN vulnerability or misconfiguration is exploited? What additional vulnerability is created by relying on the same network security team to both set policies and manage the environment, in direct conflict with Zero Trust guidance?

The only way to ensure the business is protected and compliant is to abstract data protection from the underlying infrastructure. Simon Pamplin, CTO, Certes Networks, insists it is now essential to shift the focus, stop relying on infrastructure security and use Layer 4 encryption to proactively protect business sensitive data irrespective of location…

Acknowledging Escalating Risk

Attitudes to data security need to change fast because today’s infrastructure-led model is creating too much risk. According to the 2022 IBM Data Breach survey, 83% of companies confirm they expect a security breach – and many accept that breaches will occur more than once. Given this perception, the question has to be asked: why are businesses still reliant on a security posture focused on locking the infrastructure down?

Clearly that doesn’t work. While not every company will experience the catastrophic impact of the four-year-long data breach that ultimately affected 300 million guests of Marriott Hotels, attackers are routinely spending months inside businesses looking for data. In 2022, it took an average of 277 days—about nine months—to identify and contain a breach. Throughout this time, bad actors have access to corporate data; they have the time to explore and identify the most valuable information. And the chance to copy and/or delete that data – depending on the attack’s objective.

The costs are huge: the average cost of a data breach in the US is now $9.44 million ($4.35 million is the average cost globally). From regulatory fines – which are increasingly punitive across the globe – to the impact on share value, customer trust, even business partnerships, the long-term implications of a data breach are potentially devastating.

Misplaced Trust in Infrastructure

Yet these affected companies have ostensibly robust security postures. They have highly experienced security teams and an extensive investment in infrastructure. But they have bought into the security industry’s long perpetuated myth that locking down infrastructure, using VPNs, SD WANs and firewalls, will protect a business’ data.

As breach after breach has confirmed, relying on infrastructure security fails to provide the level of control needed to safeguard data from bad actors. For the vast majority of businesses, data is rarely restricted to the corporate network environment. It is in the cloud, on a user’s laptop, on a supplier’s network. Those perimeters cannot be controlled, especially for any business that is part of supply chain and third-party networks. How does Vendor A protect third party Supplier B when the business has no control over their network? Using traditional, infrastructure dependent security, it can’t.

Furthermore, while an SD WAN is a more secure way of sending data across the Internet, it only provides control from the network egress point to the end destination. It provides no control over what happens on an organisation’s LAN side. It cannot prohibit data being forwarded on to another location or person. Plus, of course, it is accepted that SD WAN misconfiguration can add a risk of breach, which means the data is exposed – as shown by the public CVEs (Common Vulnerabilities and Exposures) available to review on most SD WAN vendors’ websites. And while SD WANs, VPNs and firewalls use IPsec as an encryption protocol, their approach to encryption is flawed: the encryption keys and management are handled by the same group, in direct contravention of accepted zero trust standards of “Separation of Duties”.

Protect the Data

It is, therefore, essential to take another approach, to focus on protecting the data. By wrapping security around the data, a business can safeguard this vital asset irrespective of infrastructure. Adopting Layer 4, policy-based encryption ensures the data payload is protected for its entire journey – whether it was generated within the business or by a third party.

If it crosses a misconfigured SD WAN, the data is still safeguarded: it is encrypted, making it valueless to any hacker. However long an attack may continue, however long an individual or group can be camped out in the business looking for data to use in a ransomware attack, if the sensitive data is encrypted, there is nothing to work with.

Because only the payload data is encrypted, while header data remains in the clear, there is minimal disruption to network services or applications – and troubleshooting an encrypted network becomes easier.
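To make the idea concrete, here is a minimal sketch of payload-only encryption. It is purely illustrative: a toy HMAC-based keystream stands in for a production cipher such as AES-GCM, and the packet structure is an assumption – the point is simply that header fields stay readable while the payload becomes opaque.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from HMAC-SHA256 (toy construction,
    for illustration only -- not a production cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_payload(packet: dict, key: bytes) -> dict:
    """Encrypt only the payload; header fields (addresses, ports) stay in
    the clear so routing and troubleshooting still work."""
    nonce = os.urandom(12)
    payload = packet["payload"]
    cipher = bytes(a ^ b for a, b in
                   zip(payload, _keystream(key, nonce, len(payload))))
    return {**packet, "payload": cipher, "nonce": nonce}

def decrypt_payload(packet: dict, key: bytes) -> dict:
    """Reverse the XOR with the same keystream to recover the payload."""
    plain = bytes(a ^ b for a, b in
                  zip(packet["payload"],
                      _keystream(key, packet["nonce"], len(packet["payload"]))))
    out = {**packet, "payload": plain}
    out.pop("nonce")
    return out
```

An intercepted packet still routes normally (the header is untouched), but its payload is worthless to an attacker without the key.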

This mindset shift protects not only the data and, by default, the business, but also the senior management team responsible – indeed personally liable – for security and information protection compliance. Rather than placing the burden of data protection onto network security teams, this approach realises the true goal of zero trust: separating policy setting responsibility from system administration. The security posture is defined from a business standpoint, rather than a network security and infrastructure position – and that is an essential and long overdue mindset change.

Conclusion

This mindset change is becoming critical – from both a business and regulatory perspective. Over the past few years, regulators globally have increased their focus on data protection. From punitive fines – including the European Union’s General Data Protection Regulation (GDPR) maximum of €20 million or 4% of global annual turnover, whichever is higher, per breach – to the risk of imprisonment, the rise in regulation across China and the Middle East reinforces the clear global recognition that data loss has a material cost to businesses.

Until recently, however, regulators have not been prescriptive about the way in which that data is secured – an approach that has allowed the ‘lock down infrastructure’ security model to continue. This attitude is changing.  In North America, new laws demand encryption between Utilities’ Command and Control centres to safeguard national infrastructure. This approach is set to expand as regulators and businesses recognise that the only way to safeguard data crossing increasingly dispersed infrastructures, from SD WAN to the cloud, is to encrypt it – and do so in a way that doesn’t impede the ability of the business to function.

It is now essential that companies recognise the limitations of relying on SD WANs, VPNs and firewalls. Abstracting data protection from the underlying infrastructure is the only way to ensure the business is protected and compliant.

Nine cyber security trends to watch out for in 2023

By Miri Marciano, Associate Director, Cybersecurity Expert at Boston Consulting Group

Here’s what organisations should be on the lookout for in an increasingly volatile environment, where attackers are constantly finding new ways to access sensitive information and take control of vital systems. The wider issue will be to make sure an organisation’s recovery system is foolproof – ensuring it can bounce back from an attack in an instant.

As we develop new technologies such as the metaverse, organisations must be on the lookout for the new tools attackers will use. It is critical they consider the following trends that we will see in 2023…

1. Cyber will continue to be a big business

Cyber will remain big business: as long as new technologies are being developed, there will always be more hackers, as we’ve seen this year. Effective cyber protection is now regarded as a significant competitive advantage, and security has become a major focus at board level in public and private organisations as an area of ongoing strategic investment – a key lesson for next year.

2. There will be an increase in attack surface expansion

The extensive use of cloud applications by remote staff, customers, suppliers, and third parties has multiplied the attack vectors and vulnerabilities across complex, interconnected tech supply chains. There has also been exponential growth in connected low security IoT devices, adding to the rapidly growing attack surface. We also continue to feel the impact of geopolitics on the cybersecurity threat landscape.

3. Geopolitics will impact the cybersecurity threat landscape

Governments are starting to attack other nations and their critical infrastructure, and this will grow in 2023. These attacks won’t be made for monetary gain; they will be acts of terrorism, or an additional weapon in a kinetic confrontation between parties.

4. Ransomware will continue to rank highest in terms of types of threats

In terms of types of attacks, ransomware has grown as a threat this year in the shape of double extortion, including data exfiltration, ransomware-as-a-service and massive DDoS attacks. With these increasing threats there must be an increase in talent, and businesses are having to outsource to MSSPs as the job market in the cybersecurity sector is highly competitive.

5. An increase in supply chain attacks

Threat groups will increase their interest and capability in supply chain attacks and attacks against Managed Security Services Providers (MSSPs).

6. AI and machine learning will be made use of

Attackers will increase their use of AI and machine learning, as well as other technologies, to launch increasingly sophisticated attacks. Social engineering-based attacks will be strengthened by AI and ML. It is simpler and faster to gather data on businesses and employees using these capabilities.

It is an effective tool for cybercriminals because of its ability to anticipate what’s happening now and what might happen in the future.

On the other hand, AI can strengthen cybersecurity: AI-powered systems such as SIEM platforms allow security teams to detect threats faster and respond to incidents more quickly, creating correlations, automation and more.
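As a simple example of the kind of correlation a SIEM performs, the sketch below implements a hypothetical rule (the event format and thresholds are illustrative assumptions): flag any source IP with five or more failed logins inside a five-minute window.

```python
from collections import defaultdict

def correlate_failed_logins(events, window=300, threshold=5):
    """Flag source IPs with `threshold` or more failed logins inside a
    `window`-second span -- a classic brute-force correlation rule.
    `events` is an iterable of (timestamp, source_ip, outcome) tuples."""
    by_src = defaultdict(list)
    for ts, src, outcome in sorted(events):
        if outcome == "fail":
            by_src[src].append(ts)
    alerts = set()
    for src, times in by_src.items():
        # Slide over each run of `threshold` consecutive failures and
        # check whether they fit inside the time window.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                alerts.add(src)
                break
    return alerts
```

A production SIEM applies hundreds of such rules (increasingly tuned or generated with ML) over streaming data, but the underlying idea – correlating individually harmless events into a signal – is the same.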

7. There will be a talent shortage

There will continue to be a highly competitive labour market for cyber talent. Organisations are increasingly investing in automation and orchestration to address cybersecurity tasks.

They will outsource to specialised managed security services providers (MSSPs) rather than running on-premises deployments.

8. The government will need to act

Nations will need to ensure protection and safeguarding of critical national infrastructure and services. Governments need to look at adapting regulations, data protection policies and compliance requirements and invest in building a culture of security awareness across organisations.

9. The main focus will be on recovery

Organisations will shift towards additional investing in recovery and restoration to prepare for managing a crisis – they will need to understand that a crisis is just a matter of time.

The secrets of no drama data migration

With mergers, acquisitions and divestments at record levels, the speed and effectiveness of data migration has come under the spotlight. Every step of the data migration process raises concerns, especially in spin-off or divestment deals where just one part of the business is changing ownership.

What happens if confidential information is accessed by the wrong people? If supplier requests cannot be processed? If individuals with the newly acquired operation have limited access to vital information and therefore do not feel part of the core buyer’s business? The implications are widespread – from safeguarding Intellectual Property, to staff morale, operational efficiency, even potential breach of financial regulation for listed companies.

With traditional models for data migration recognised as high risk, time consuming and a potential threat to the deal itself, Don Valentine, Commercial Director at Absoft, explains the need for a different approach – one that not only de-risks the process but adds value by reducing the time to migrate and delivering fast access to high quality, transaction-level data…

Record Breaking

2021 shattered Merger & Acquisition (M&A) records – with M&A volume hitting over $5.8 trillion globally. In addition to whole company acquisitions, 2021 witnessed announcements of numerous high-profile deals, from divestments to spin-offs and separations. But M&A performance history is far from consistent. While successful mergers realise synergies, create cost savings and boost revenues, far too many are derailed by cultural clashes, a lack of understanding and, crucially, an inability to rapidly combine the data, systems and processes of the merged operations.

The costs can be very significant, yet many companies still fail to undertake the data due diligence required to safeguard the M&A objective. Finding, storing and migrating valuable data is key, before, during, and post M&A activity. Individuals need access to data during the due diligence process; they need to migrate data to the core business to minimise IT costs while also ensuring the acquired operation continues to operate seamlessly.  And the seller needs to be 100% confident that only data pertinent to the deal is ever visible to the acquiring organisation.

Far too often, however, the data migration process adds costs, compromises data confidentiality and places significant demands on both IT and business across both organisations.

Data Objectives

Both buyer and seller have some common data migration goals. No one wants a long-drawn-out project that consumes valuable resources. Everyone wants to conclude the deal in the prescribed time. Indeed, completion of the IT integration will be part of the Sales & Purchase Agreement (SPA) and delays could have market facing implications. Companies are justifiably wary of IT-related disruption, especially any downtime to essential systems that could compromise asset safety, production or efficiency; and those in the business do not want to be dragged away from core operations to become embroiled in data quality checking exercises.

At the same time, however, there are differences in data needs that can create conflict. While the seller wants to get the deal done and move on to the next line in the corporate agenda, the process is not that simple. How can the buyer achieve the essential due diligence while meeting the seller’s need to safeguard non-deal related data, such as HR, financial history and sensitive commercial information? A seller’s CIO will not want the buying company’s IT staff in its network, despite acknowledging the buyer needs to test the solution. Nor will there be any willingness to move the seller’s IT staff from core strategic activity to manage this process.

For the buyer it is vital to get access to systems. It is essential to capture vital historic data, from stock movement to asset maintenance history. The CIO needs early access to the new system, to provide confidence in the ability to operate effectively after the transition – any concerns regarding data quality or system obsolescence need to be flagged and addressed early in the process. The buyer is also wary of side-lining key operations people by asking them to undertake testing, training and data assurance.

While both organisations share a common overarching goal, the underlying differences in attitudes, needs and expectations can create serious friction and potentially derail the data assurance process, extend the SPA, even compromise the deal.

Risky Migration

To date, processes for finding, storing and managing data before, during and after M&A activity have focused on the needs of the selling company. The seller provided an extract of the SAP system holding the data relevant to the agreed assets and shared that with the buyer. The buyer then had to create configuration and software to receive the data, transform it, and then carry out application data migration to provide operational support for key functions such as supplier management.

This approach is fraught with risk. Not only is the buyer left blind to data issues until far too late but the entire process is time consuming. It also typically includes only master data, not the transactional history required, due to the serious challenges and complexity associated with mimicking the chronology of transactional data loading. Data loss, errors and mis-mapping are commonplace – yet only discovered far too late in the process, generally after the M&A has been completed, leaving the buyer’s IT team to wrestle with inaccuracy and system obsolescence.

More recently, different approaches have been embraced, including ‘behind the firewall’ and ‘copy/raze’.  The former has addressed some of the concerns by offering the buyer access to the technical core through a temporary separated network that houses the in-progress build of the buyer’s systems. While this avoids the need to let the buyer into the seller’s data and reduces the migration process as well as minimising errors, testing, training and data assurance, it is flawed. It still requires the build of extract and load programs and also uses only master data for the reasons stated above. It doesn’t address downtime concerns because testing and data assurance is still required. And it still demands the involvement of IT resources in non-strategic work.  Fundamentally, this approach is still a risk to the SPA timeframe – and therefore does not meet the needs of buyer or seller.

The ‘copy/raze’ approach has the benefit of providing transactional data. The seller creates an entire copy and then deletes all data relating to assets not being transferred before transferring to the buyer. However, this model requires an entire portfolio of delete programmes which need to be tested – a process that demands business input. Early visibility of the entire data resources ensures any problems that could affect the SPA can be flagged but the demands on the business are also significant – and resented.

De-risking Migration

A different approach is urgently required. The key is to take the process into an independent location. Under agreement between buyer, seller and data migration expert, the seller provides the entire technical core which is then subjected to a dedicated extract utility. Configuration is based on the agreed key deal assets, ensuring the extraction utility automatically undertakes SAP table downloads of only the data related to these assets – removing any risks associated with inappropriate data access. The process is quicker and delivers better quality assurance. Alternatively, the ‘copy/raze’ approach can be improved by placing the entire SAP system copy into escrow – essentially a demilitarised zone (DMZ) in the cloud – on behalf of both parties.  A delete utility is then used to eradicate any data not related to the deal assets – with the data then verified by the seller before the buyer has any access. Once confirmed, the buyer gains access to test the new SAP system prior to migration.
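Conceptually, the extract utility reduces the full dataset to only those rows tied to the in-scope deal assets before anything is shared. The sketch below is purely illustrative – the table names, the `asset_id` key and the flat-dictionary layout are hypothetical stand-ins, not the actual SAP extraction utility described above.

```python
def extract_for_deal(tables: dict, deal_assets: set) -> dict:
    """Keep only rows tied to the assets in scope for the deal; all other
    data (HR, financial history, other assets) never leaves the seller.
    `tables` maps a table name to a list of row dictionaries."""
    extract = {}
    for name, rows in tables.items():
        kept = [row for row in rows if row.get("asset_id") in deal_assets]
        if kept:
            extract[name] = kept      # tables with no in-scope rows are dropped
    return extract
```

Because the filter is configured once from the agreed asset list, the same scoping is applied consistently across every table – the property that removes the risk of inappropriate data access.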

These models can be used separately and in tandem, providing a data migration solution with no disruption and downtime reduced from weeks to a weekend. The resultant SAP solution can be optimally configured as part of the process, which often results in a reduction in SAP footprint, with the attendant cost benefits.  Critically, because the buyer gains early access to the transaction history, there is no need for extensions for the SPA – while the seller can be totally confident that only the relevant data pertaining to the deal is ever visible to the buyer.

Conclusion

By embracing a different approach to data migration, organisations can not only assure data integrity and minimise the downtime associated with data migration but also reduce the entire timescale. By cutting the data due diligence and migration process from nine months to three, the M&A SPA can be significantly shorter, reducing the costs associated with the transaction while enabling the buyer to confidently embark upon new strategic plans.

Network protection in the hybrid era  

By Gary Cox, Director of Technology Western Europe at Infoblox  

Since emerging from the worst effects of the pandemic, a mix of in-office and remote work has become common practice for many organisations. Initially seen as a temporary way of easing employees back into the workplace after almost two years working from home, it appears that hybrid work is here to stay for the foreseeable future. As of May 2022, almost a quarter of UK employees worked in a hybrid fashion.

However, in an effort to accommodate the needs of their new hybrid workforce, business leaders have inadvertently increased their organisations’ security and compliance risks. This distributed way of working has dramatically increased the attack surface. It’s perhaps little surprise, then, that according to Infoblox’s 2022 UK State of Security Report, the majority of UK businesses experienced up to five security incidents in a year. The advent of the hybrid era means it’s never been more important for businesses to protect their network – or harder to achieve.

Expanded attack surface

Lockdown forced many organisations to leave their physical offices for good, while others adopted hybrid work, where most of their employees worked remotely for at least part of the week. Whatever their preference, companies needed to move their applications and data into the cloud and protect them beyond traditional security solutions like firewalls and VPNs.

But employees logging in over their home WiFi networks, and using personal devices for work purposes – or work devices for personal affairs – meant the attack surface was enormous. As a result, businesses experienced a large number of attacks, many of which resulted in downtime, which can cost organisations considerable financial and reputational damage. Indeed, 43 percent of respondents cited breach damages of $1 million.

Hybrid work was found to provide bad actors with a much wider range of entry points into a company’s network, too. Insecure WiFi, for instance, was reported as being the biggest reason for data breaches, followed by insider access through current or former employees or contractors, and employee-owned endpoints, such as mobile devices and laptops.

Trust nothing

Most people today are aware of the perennial threat of cyberattack, yet few can do much to protect themselves beyond changing the password on their home WiFi router. Organisations must therefore take responsibility for security. This requires them to adopt a zero trust approach, which works on the assumption that attackers have already breached the network.

A multi-layered zero trust framework means all parties must undergo authentication checks at every point, as data flows in and out of an organisation’s network. Doing so will enable the organisation to protect everything that’s connected to that network, as well as limiting the damage in the event that an attacker breaches its defences.
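A minimal sketch of that per-request verification might look like the following. It is illustrative only: the session store, device registry and policy table are hypothetical structures, and the point is simply that identity, device and entitlement are all re-checked on every access, with no trust carried over from earlier requests.

```python
def authorise(request: dict, sessions: dict, device_registry: set,
              policy: dict) -> bool:
    """Re-verify everything on every request -- nothing inherits trust
    from having been on the network before."""
    # 1. Something that proves identity: a valid, known session token.
    session = sessions.get(request.get("token"))
    if session is None:
        return False                      # unauthenticated
    # 2. A known, managed device.
    if request.get("device_id") not in device_registry:
        return False                      # unknown or unmanaged endpoint
    # 3. Least privilege: the user must be entitled to this resource.
    allowed = policy.get(session["user"], set())
    return request.get("resource") in allowed
```

In a real deployment each check is stronger (signed tokens, device posture attestation, contextual policy), but the shape is the same: every hop is a fresh authentication decision.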

Improved security posture

Organisations everywhere, regardless of industry, should consider how to leverage their existing technology to improve their security posture. For example, solutions can take advantage of DDI – a combination of DNS (Domain Name System), DHCP (Dynamic Host Configuration Protocol), and IPAM (IP Address Management) services, which are already used for device connectivity – to gain visibility into network activities down to the device level.

In addition to this, DNS security is essential for a zero trust approach. Given that more than 90 percent of threats that enter or leave a network will touch DNS, it is ideal for detecting potential threats. DNS security can help IT teams spot threats that other security tools miss, accelerate threat hunting, and reduce the burden on stretched perimeter defences. It helps them get more value out of third-party security solutions, through real-time, two-way sharing of security event information and through automation, which lowers the costs associated with manual effort and human error.
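As a simple illustration of DNS-layer detection, the sketch below checks each queried domain against a blocklist and flags long, high-entropy labels of the kind produced by malware domain generation algorithms. The 3.8-bit entropy threshold and 12-character length cutoff are arbitrary example values, not recommendations.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def inspect_query(domain: str, blocklist: set,
                  entropy_threshold: float = 3.8) -> str:
    """Classify a DNS query: known-bad, suspicious (DGA-like
    high-entropy label), or allowed."""
    if domain in blocklist:
        return "block"
    label = domain.split(".")[0]
    if len(label) > 12 and shannon_entropy(label) > entropy_threshold:
        return "flag"                  # looks machine-generated
    return "allow"
```

Real DNS security products combine threat-intelligence feeds, behavioural analytics and response-policy zones, but even this toy filter shows why DNS is such a useful choke point: every lookup passes through it before a connection is made.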

The COVID crisis has changed the way we work – potentially forever. As long as people continue to work remotely – even only once a week – the use of home WiFi networks will continue to increase the threat of compromise. It’s essential, then, that organisations have sufficiently robust security strategies in place to meet the demands of the hybrid era. A zero trust approach, supported by DDI metadata and DNS security, will help businesses adjust.

We need higher factor protection in the sun AND in the workplace 

By Steven Hope, CEO of Authlogics

When I was growing up, we didn’t have sunscreen per se, it was more referred to as suntan lotion. It wasn’t part of the summertime ritual it is for many people today and getting repeatedly burned was part of the holiday experience – a price to pay for the tan that announced to everyone you had been on holiday. Even if you did apply it, the level of protection on offer was very low compared to nowadays.

If you are a parent of young children, chances are you make sure that they have sun cream on before they head off to nursery or school for the day (encouraged by regular email reminders). But when it comes to ourselves, many of us are probably a little more lax in our routine, preferring to balance the risk and the chances of getting burnt.

So, if we treat our own safety in this way, it is unsurprising that a risk-based approach filters into other areas of our lives. We all know the five-second rule for food that falls on the floor (it isn’t true, just in case you didn’t know), driving a few miles an hour over the speed limit, or using a password like 123456, or a variation of it, for every account. These are things most people know they probably shouldn’t do, but on balance think ‘what harm will it do?’.

The problem is that these seemingly minor transgressions can and do cause harm, and the more times people ‘offend’ the greater the risk becomes. Of course, with risk comes the potential for ramification, and in the case of passwords this means over-exposure to data breaches. Did you know that one in 250 corporate accounts is breached every month? And 80% of data breaches are caused by weak, stolen, or reused passwords! Reducing the risk of getting burned by a breach is similar to protection from the sun – more factors (if applied correctly) combined will increase protection.

The use of multi-factor authentication (MFA) may not be the first thing sunseekers and holiday makers think of when, for example, lounging on the deck of a cruise liner, but for one of the world’s largest operators – Carnival, it is certainly front and centre. This follows widespread reports this week that it has been fined $5 million by New York’s Department of Financial Services for cyber security violations including failing to implement MFA. It was a similar story a few months back when the Information Commissioner’s Office in the UK issued a fine to a company for (amongst other reasons) the lack of MFA.

Yet even those who do implement MFA may be doing the right thing without doing things right. Many MFA solutions only provide a second factor (the first being a legacy password); since the password is a known weak point, this doesn’t amount to true MFA. With this in mind, many consider three factors – something you know (a password, PIN or pattern), something you have (a laptop or mobile device), and something you are (a biometric) – to be the optimal combination, balancing high levels of security with usability.
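To make the three factors concrete, here is a minimal sketch: an RFC 6238 time-based one-time password stands in for the ‘something you have’, a salted PBKDF2 hash verifies the ‘something you know’, and the biometric result is assumed to arrive as a boolean from the device’s sensor. It is illustrative only, not a production implementation.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (the 'something you have')."""
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F          # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

def verify_three_factors(password: str, stored_hash: bytes, salt: bytes,
                         otp: str, secret: bytes, biometric_ok: bool,
                         at=None) -> bool:
    """All three factors must pass: know (password), have (TOTP device),
    are (biometric check performed by the device's sensor)."""
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    has = hmac.compare_digest(totp(secret, at), otp)
    return knows and has and biometric_ok
```

A single compromised factor – a phished password, a stolen phone – is not enough on its own, which is exactly the ‘higher factor protection’ the analogy describes.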

Security solutions like sunscreen have evolved in recent years, taking advantage of new technologies to offer far greater protection. However, whilst factor 50 might be perfect for your person, it may be somewhat excessive for your perimeter. Whether your employees are back working in the office, from home or the garden this summer, ensure that they have the right factors for protection.

Sail the digital transformation seas more securely with Zero Trust Access 

By Tim Boivin, PortSys (pictured) 

Security has long been the boat anchor that drags down innovation – a deadweight that prevents digital transformation efforts from sailing to success.  

With the pandemic, digital transformation efforts accelerated far beyond the horizon of what was thought possible. Those changing tides also gave cyber pirates the opportunity to hack away – torpedoing infrastructure to launch ransomware, phishing, and data exfiltration attacks. 

Unfortunately, too many IT security teams and lines of business still don’t sail in the same direction to find the calm seas that offer more secure digital transformation. As a result, the captains of business frequently consider security as merely sunk costs, instead of the transformative vessel it should be. 

Zero Trust Access (ZTA) sets a new course so your organization can discover greater market treasures. ZTA generates the strategic tailwinds you need for your digital transformation efforts to reach their ultimate destination – competitive advantage. It needs to be considered as a valuable strategic business asset – one that reduces cost, improves productivity, and ultimately drives revenue and profit. 

How? ZTA implements and scales quickly without disrupting your existing infrastructure. It allows users to more securely and seamlessly access local and cloud resources they need to do their jobs from anywhere, improving productivity.  

ZTA accomplishes all this while dramatically reducing threats against your infrastructure. Instead of saying “No!” to anyone who wants to work more productively but requires greater access to do so, the pilots in the IT boathouse say “Yes, but…” – relying on ZTA’s principles of “Never Trust, Always Verify.” That creates a safer journey as your users get closer to your customers, wherever they are. 

Ultimately, ZTA transforms that IT boat anchor that’s been dragging you down into a billowing business mainsail – so you can cruise to competitive advantage. 

Tim Boivin is the marketing director for PortSys, whose enterprise customers around the world use Total Access Control (TAC), its next-gen reverse proxy solution based on Zero Trust. 

The four biggest mistakes in IT security governance

By Atech

Intelligent IT security and endpoint protection tools are critical components of security governance, and the stakes within today’s threat landscape have never been higher.

A lapse in identity protection or zero trust networking could spell financial disaster for a company. We know that attacks are increasing in sophistication, frequency and cost, with research putting the average cost of a data breach at an eye-watering $4.24 million.

But what about the other end of the spectrum? How can companies identify and rectify issues in their security governance before they become a problem?

#1 Not realising you are a target with less-than-perfect cloud IT security

Many business leaders using cloud data storage mistakenly believe they are not vulnerable to security breaches from outside attackers – but no organisation is too small or too obscure to be a target.

The barriers to entry in becoming a cybercriminal are incredibly low, yet the cost to a brand’s reputation is staggeringly high. Furthermore, fines issued to businesses for not adequately managing customer data are also extremely costly.

Therefore, IT leaders need reliable security governance systems and full visibility over user data, secure identity and access management protocols, encryption, and more.

Businesses can update their IT security playbook by partnering with managed security service providers. By understanding the distinct accreditations that service providers display, you can distinguish solution specialisms from operating procedures and build a real picture of how the service aligns with your business’s needs. You need timely guidance on the latest cloud security threats, how to mitigate them, and how to remediate fast. This can only come with near-real-time insight into behaviours and attacks, backed by the expert support of a security operations centre carrying an industry-recognised accreditation such as CREST.

We outline the biggest mistakes in IT security governance and provide a comprehensive view of today’s cloud security challenges and how best to tackle them as an organisation. Read on to identify the other critical mistakes you could be making.

What vulnerability management should deliver  

By Eleanor Barlow, SecurityHQ

The purpose of Vulnerability Management is to ensure that organisations can accurately detect, classify and contextualise vulnerabilities within their estate, and act on them to reduce the chances of a successful attack that exploits those vulnerabilities.

With Vulnerability Management, once vulnerabilities are detected and prioritised, remediation programmes are then put in place to ensure patch management and compliance. The process works on a 24/7 basis, so that analysts are always monitoring the network for new vulnerabilities.

Key Challenges with Vulnerability Management

There are three key issues with supporting in-house vulnerability management.

First, in-house programmes often lack the discipline and patch management required, as a dedicated team is rarely assigned to the process. Frequently, the task is pushed onto the IT department, who already have their own workload and rarely have the skillset to conduct Vulnerability Management sufficiently.

Second, without the right number of analysts, or the analysts with the right skillset, organisations habitually lack the comprehensive visibility and ability to adequately analyse threats, which puts them at a greater risk.

Third, businesses often underinvest in the Vulnerability Management process and do not dedicate the right resources in terms of technology, people, and time. This means that vulnerabilities are missed, which leaves businesses open to attack.

Who Needs Vulnerability Management?

No matter the industry or size, all organisations need a Vulnerability Management process that provides them with the ability to detect weaknesses within their IT estate. This is necessary to know the risk levels of those weaknesses, so that the right actions can be taken. It is also a great way to establish the order of priority when it comes to patching. You need to be able to analyse threats and your risk exposure, know what your key concern is, and act on it swiftly in the right order. You don’t want to leave the greatest threat to be patched last.

What Your Vulnerability Management Should Give You

Successful Vulnerability Lifecycle Management means that you can assess and prioritise vulnerabilities to reduce the risk of intrusion, exploitation, and data breaches.

Analysts should be able to provide complete visibility of IT assets, perform scans, and analyse vulnerability data to advise on remediation priority and reduce risk.
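The idea of remediation priority can be sketched in a few lines. This is an illustrative example only (the field names and the simple score-times-criticality weighting are assumptions, not SecurityHQ’s methodology): rank each finding by its CVSS score weighted by how critical the affected asset is to the business, so the greatest risks are patched first.

```python
# Hypothetical sketch: order findings so the biggest business risks
# are remediated first, not just the highest raw CVSS scores.
def prioritise(vulns):
    """vulns: list of dicts with 'id', 'cvss' (0-10) and
    'asset_criticality' (1 = low .. 3 = business-critical)."""
    return sorted(vulns,
                  key=lambda v: v["cvss"] * v["asset_criticality"],
                  reverse=True)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_criticality": 1},
    {"id": "CVE-B", "cvss": 7.5, "asset_criticality": 3},
    {"id": "CVE-C", "cvss": 5.0, "asset_criticality": 2},
]
# CVE-B (7.5 x 3 = 22.5) outranks CVE-A (9.8 x 1 = 9.8): a high-severity
# flaw on a critical asset can matter more than a critical flaw on a
# minor one. Real services use far richer context than this toy weight.
```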

Outsourcing Vulnerability Management Checklist

If you are outsourcing Vulnerability Management to an MSSP, make sure that the service includes the following:

  • Auditable collaboration.
  • Accurate vulnerability mitigation prioritisation to identify key areas of concern/risk.
  • Intelligent analytic reporting for making informed decisions.
  • Precise and applicable synopsis with carefully crafted reports provided on a regular basis.
  • Dedicated team who specializes in Vulnerability Management.
  • A team that is available 24/7, every day of the year, with round-the-clock support for scheduling, monitoring, and reporting on scanning activities. These need to be people, not automations!
  • The ability to identify as well as map all risk levels to specific threats.
  • Access to labs and the right intelligence to support advisories.

Vulnerability management not only strengthens the cyber security posture of your business, but it also means that stakeholders have visibility of, and an understanding of, your business’s attitude towards cyber security. This, in turn, can support ROI by unleashing the full potential of the technology investments made.

For more information on Vulnerability Management, download data sheet here.

Or, to speak with an analyst, contact the team here.

About SecurityHQ

SecurityHQ is a global MSSP that detects and responds to threats instantly. As your security partner, we alert and act on threats for you. Gain access to an army of analysts that work with you, as an extension of your team, 24/7, 365 days a year. Receive tailored advice and full visibility to ensure peace of mind, with our Global Security Operation Centres. Utilize our award-winning security solutions, knowledge, people, and process capabilities, to accelerate business and reduce risk and overall security costs.

Facebook: https://www.facebook.com/Sechq

Twitter: https://twitter.com/security_hq

LinkedIn: https://www.linkedin.com/company/securityhq/

Website: https://www.securityhq.com/

Author – Eleanor Barlow

Eleanor is an experienced named author and ghost writer, who specialises in researching and reporting on the latest in cyber security intelligence, developing trends and security insights. As a skilled Content Manager, she is responsible for SecurityHQ’s content strategy. This includes generating and coordinating content for the latest articles, press releases, whitepapers, case studies, website copy, social accounts, newsletters, threat intelligence and more. Eleanor holds a first-class degree in English Literature, and an MA from the University of Bristol. She has strong experience writing in B2B environments, as well as for wider technology-based research projects.

Just one crack – That’s all a hacker needs…

By Michael Oldham, CEO of PortSys, Inc.

Just one crack. That’s all a hacker needs to find to cripple your organization. Here are three essential steps to take to stop that crack from blowing your infrastructure wide open for bad actors:

Multi-factor authentication (MFA) that includes device validation, certificate checks, Geo IP intelligence and other security policies makes it much harder for hackers to get inside your infrastructure by stealing, guessing or buying credentials.
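A hedged sketch of how such layered checks combine (the function, field names and allowed-country values below are hypothetical illustrations, not PortSys’s TAC implementation): access is granted only when credentials, device validation, certificate checks and Geo IP policy all pass, so stolen or purchased credentials alone are never enough.

```python
# Hypothetical sketch: every check must pass before access is granted.
ALLOWED_COUNTRIES = {"GB", "US"}  # illustrative Geo IP policy values

def access_allowed(ctx: dict) -> bool:
    checks = (
        ctx.get("credentials_valid", False),            # username/password or MFA token
        ctx.get("device_registered", False),            # device validation
        ctx.get("client_cert_valid", False),            # certificate check
        ctx.get("geoip_country") in ALLOWED_COUNTRIES,  # Geo IP intelligence
    )
    # Default deny: any missing or failed signal blocks the request.
    return all(checks)
```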

Close ports across your legacy infrastructure that you opened for cloud, web services, Shadow IT and other applications. This will minimize your exposure to hackers through the internet. Every open port – such as VPN, RDP, MDM, Web Servers, cloud services or infrastructure – is another point of attack hackers gleefully exploit.

A single crack in just one port increases your exposure dramatically.  And your IT team already fights a losing battle trying to manage, maintain, patch and install updates for all those security solutions for those open ports. Closing ports to better secure your organization has a real, direct, significant, long-lasting business benefit.
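Before closing ports you need to know which ones are exposed. A minimal audit sketch using Python’s standard `socket` module is shown below; run it only against hosts you own or are authorised to test, and treat it as a starting point, not a substitute for a proper external scan.

```python
# Hypothetical sketch: check which well-known ports a host accepts
# connections on, so unnecessary ones can be identified and closed.
import socket

def open_ports(host: str, ports, timeout: float = 0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP connection succeeds,
            # i.e. something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# e.g. open_ports("127.0.0.1", [22, 80, 443, 3389])
```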

Segmentation of resources limits the damage anyone can do inside your infrastructure in the event you are breached. Everyone is committed to keeping hackers out, but the truth is they still get in, or you may even be a victim of an insider attack.

Segmentation prevents bad actors from pivoting once they are inside to gain access to other parts of your infrastructure, where they can steal or lock up data. With segmentation, those compartmentalized resources aren’t accessible without proper authentication.

Another benefit of segmentation is that it doesn’t have to just be at the network level. Segmentation can be done at the resource level through intelligent policies that provide access to resources only under specific circumstances.
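Resource-level segmentation can be pictured as a per-resource policy table with default deny. The policies, roles and resource names below are invented for illustration only; they show the shape of the idea, not any product’s configuration.

```python
# Hypothetical sketch: each request is evaluated against the policy of
# the specific resource, rather than trusted once at the network edge.
POLICIES = {
    "hr-records": {"roles": {"hr"},         "managed_device": True},
    "wiki":       {"roles": {"hr", "eng"},  "managed_device": False},
}

def may_access(resource: str, role: str, on_managed_device: bool) -> bool:
    policy = POLICIES.get(resource)
    if policy is None:
        return False  # default deny: unknown resources are unreachable
    if role not in policy["roles"]:
        return False  # pivoting to other resources fails authentication
    if policy["managed_device"] and not on_managed_device:
        return False  # sensitive data only from managed devices
    return True
```

Because each resource enforces its own conditions, a bad actor who compromises one account or one segment cannot automatically reach everything else.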

These three steps help prevent just one crack – or several – from putting your infrastructure at risk, ensuring much greater security across your enterprise. And that’s good for any business.

Michael Oldham is CEO of PortSys, Inc., whose Total Access Control (TAC) Zero Trust solution is used by enterprise organizations around the world to secure their infrastructure.