
How businesses can protect their most valuable asset this Data Privacy Day and beyond

By Stuart O'Brien

With last weekend marking the 18th Data Privacy Day, we sat down with some of the industry’s experts to find out more about why this event is still so important and how organisations can get one step ahead when it comes to protecting their increasingly precious data. Here’s what they had to say…

Samir Desai, Vice President at GTT  

“This year’s Data Privacy Day provides us with yet another reminder of just how important it is for businesses to protect their most valuable asset. However, this is something that, unfortunately, has also never been more difficult.  

“The rapid adoption of cloud computing, IoT/IIoT, mobile devices and remote work has increased both the size and complexity of the networking landscape, and cybercriminals are taking advantage of this. Alongside common threats – such as phishing – businesses today must defend against a whole new host of potential risks, such as generative AI super-charging phishing attempts by making it easier and faster for bad actors to craft convincing content.

“To ensure data security for cloud-based apps while still providing reliable connectivity for hybrid workplaces and remote workers, the modern enterprise needs to invest in the right solutions. This may require further collaboration with managed security and service partners to identify and implement the right technologies to protect the ever-expanding perimeter. 

“For example, a zero-trust networking approach, which combines network security and software-defined connectivity into a single cloud-based service experience, could be transformative. Its ‘always-on’ security capabilities mean that data is protected, regardless of where resources or end-users reside across the enterprise environment.”

Ajay Bhatia, Global VP & GM of Data Compliance and Governance, Veritas

“Ironically, Data Privacy Day is a reminder that data privacy isn’t something a business can achieve in a single day at all. Far from that, it’s a continual process that requires vigilance 24/7/365. Top of mind this year is the impact artificial intelligence (AI) is having on data privacy. AI-powered data management can help improve data privacy and associated regulatory compliance, yet bad actors are using generative AI (GenAI) to create more sophisticated attacks. GenAI is also making employees more efficient, but it needs guardrails to help prevent accidentally leaking sensitive information. Considering these and other developments, data privacy in 2024 is more important than ever.” 

Martin Hodgson, Director Northern Europe at Paessler AG

“As our reliance on data continues to grow, protecting it and ensuring that only those we trust have access to it has never been more important.

 “Many businesses assume their IT infrastructure is sufficiently protected by a reliable firewall and an up-to-date virus scanner. However, cyber criminals are continually developing more sophisticated methods of accessing company systems and getting hold of sensitive data. Some of these methods – such as trojans – will often only be recognised when it’s already too late.  

“In order to get ahead and avoid the financial and reputational losses associated with such attacks, businesses need to invest in comprehensive security approaches which protect the entire infrastructure. Real-time IT documentation alongside a network monitoring system – which enables a business to keep track of all devices and systems, regardless of location – can help to spot the early warning signs of an attack and enable businesses to get on the front foot when it comes to protecting their increasingly valuable data.”

Mike Loukides, Vice President of Emerging Tech at O’Reilly:

“How do you protect your data from AI? After all, people type all sorts of things into their ChatGPT prompts. What happens after they hit “send”? 

“It’s very hard to say. While criminals haven’t yet taken a significant interest in stealing data through AI, the important word is “yet.” Cybercriminals have certainly noticed that AI is becoming more and more entrenched in our corporate landscapes. AI models have huge vulnerabilities, and those vulnerabilities are very difficult (perhaps impossible) to fix. If you upload your business plan or your company financials to ChatGPT to work on a report, is there a chance that they will “escape” to a hostile attacker? Unfortunately, yes. That chance isn’t large, but it’s not zero. 

“So here are a few quick guidelines to be safe: 

  • Read the fine print of your AI provider’s policies. OpenAI claims that they will not use enterprise customers’ data to train their models. That doesn’t protect you from hostile attacks that might leak your data, but it’s a big step forward. Other providers will eventually be forced to offer similar protections.
  • Don’t say anything to an AI that you wouldn’t want leaked. In the early days of the Internet, we said “don’t say anything online that you wouldn’t say in public.” That rule still applies on the Web, and it definitely applies to AI.
  • Understand that there are alternatives to the big AI-as-a-service providers (OpenAI, Microsoft, Google, and a few others). It’s possible to run several open source models entirely on your laptop; no cloud, no Internet required once you’ve downloaded the software. The performance of these models isn’t quite the equal of the latest GPT, but it’s impressive. Llamafile is the easiest way to run a model locally. Give it a try; a minimal sketch follows below.

“I’m not suggesting that anyone refrain from using AI. So far, the chances of your private data escaping are small. But it is a risk. Understand the risk, and act accordingly.” 
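For anyone who wants to try the local-model route Loukides mentions, the sketch below shows the general shape: the prompt goes to a model running on your own machine rather than to a cloud provider, so the text never leaves your network. The endpoint URL, port and response format are assumptions based on common OpenAI-compatible local servers (llamafile included); check your own tool's documentation before relying on them.

```python
# Minimal sketch: send a prompt to a model running locally (for example one
# served by llamafile) instead of a cloud provider, so the text never leaves
# your machine. The endpoint URL, port and response shape are assumptions
# based on common OpenAI-compatible local servers -- check your tool's docs.
import requests

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed default

def ask_local_model(prompt: str) -> str:
    """Query the local model over the loopback interface only."""
    payload = {
        "model": "local-model",  # many local servers accept any model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarise the risks of pasting confidential plans into public chatbots."))
```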

Attila Török, Chief Security Officer at GoTo:

“As new ways of working and engaging with tech continue to expand the vulnerability landscape and create new pathways for hackers, you’d be hard-pressed to find an IT leader whose number one concern wasn’t cybersecurity. 

“Bolstering cyber hygiene to stave off threats and protect sensitive data is a top agenda item, especially in a working world where hybrid, dispersed and remote-centric teams are commonplace. In 2024, businesses should be firing on all cylinders to scale up employee security, utilise zero-trust products, continue to enforce a strong acceptable use policy (AUP), and move toward passwordless authentication. These are simple yet powerful ways we can improve and modernise current practices to ensure that cyber threats can’t breach company systems.

“Cybersecurity is a top priority for all businesses—small and large. CTOs, working with CISOs, are responsible for protecting their business, customers, and employees from cyberattacks and data breaches. In 2024, CTOs must continue implementing robust security measures and invest in new cybersecurity technologies, including zero-trust architectures (ZTAs).”

Keiron Holyome, VP UKI and Emerging markets, BlackBerry Cybersecurity 

“AI continues to be a game-changer in data privacy and protection for businesses as well as individuals. We have entered a phase where AI opens a powerful new armoury for those seeking to defend data. When trained to predict and protect, it is cybersecurity’s most commanding advantage. But it also equips those with malicious intent. Its large scale data collection in generative business and consumer applications raises valid concerns for data and communication privacy and protection that users need to be alert to and mitigate.

“A big question at the moment is how legislation can be pervasive enough to offer peace of mind and protection against the growing generative AI threats against data privacy, while not hindering those with responsibility for keeping data safe. BlackBerry’s research found that 92% of IT professionals believe governments have a responsibility to regulate advanced technologies, such as ChatGPT…though many will acknowledge that even the most watertight legislation can’t change reality. That is, as the maturity of AI technologies and the hackers’ experience of putting it to work progress, it will get more and more difficult for organisations and institutions to raise their defences without using AI in their protective strategies.”


Health Tech and Personal Data: What ‘Powered by Data’ means for healthcare tech


By Lucy Pegler, partner, and Noel Hung, solicitor, at independent UK law firm Burges Salmon

In June 2023, the NHS launched the ‘Powered by Data’ campaign to demonstrate how use of health data delivers benefits for patients and society. The campaign draws on examples of how the responsible use of patient data can support innovation in the healthcare sector, from developing new tools to support patients to helping understand how to deliver better care.

Although framed in the context of public health services, the concept of ‘Powered by Data’ is applicable more widely to the healthcare sector. Public and private providers of healthcare, whether in-person in healthcare settings or through increasingly innovative digital services, will collect data in every interaction with their patients or clients. The responsible and trustworthy use of patient data is fundamental to improving care and delivering better, safer treatment to patients.

What is health data?

The Data Protection Act 2018 (“DPA”) defines “data concerning health” as personal data relating to the physical or mental health of an individual, including the provision of health care services, which reveals information about their health status.

Healthcare organisations that typically manage data concerning health have an additional obligation to hold “genetic data” and “biometric data” to a higher standard of protection than personal data generally.

If you process (e.g. collect, store and use) health data in the UK, UK data protection laws will apply. Broadly speaking, UK data protection law imposes a set of obligations in relation to your processing of health data. These include:

  • demonstrating your lawful basis for processing health data – health data is considered special category personal data meaning that for the purposes of the UK General Data Protection Regulation, healthcare providers must demonstrate both an Article 6 and an Article 9 condition for processing data. Typically, for the processing of health data, one of the following three conditions for processing must apply:
  1. the data subject must have given “explicit consent”;
  2. processing is necessary for the purposes of preventive or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment or the management of health or social care systems and services; or
  3. processing is necessary for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health or ensuring high standards of quality and safety of healthcare and of medicinal products or medical devices.
  • transparency – being clear, open, and honest with data subjects about who you are, and how and why you use their personal data.
  • data protection by design and default – considering data protection and privacy issues from the outset and integrating data protection into your processing activities and organisation-wide business practices.
  • technical and organisational measures – taking appropriate and proportionate technical and organisational measures to manage the risks to your systems. These measures must ensure a level of security appropriate to the risk posed.
  • data mapping – understanding how data is used and held in your organisation (including carrying out frequent information audits).
  • use of data processors – only engaging another processor (a ‘sub-processor’) after receiving the controller’s prior specific or general written authorisation.

The NHS and the adult social care system have stated their commitment to upholding the public’s rights in law, including those enshrined in the DPA 2018 and the common law duty of confidentiality. These obligations extend to healthcare providers, whether NHS, local authority or private, and whether they operate through online digital healthcare solutions or more traditional in-person settings.

The Caldicott principles

The Caldicott principles were first introduced in 1997 and have since expanded to become a set of good practice guidelines for using and keeping safe people’s health and care data.

There are eight principles, and all NHS organisations and local authorities which provide social services must appoint a Caldicott guardian to support keeping people’s information confidential and maintaining certain standards. Private and third sector organisations that do not deliver any publicly funded work do not need to appoint a Caldicott guardian.

However, the UK Caldicott Guardian Council (“UKCGC”) considers it best practice for any organisation that processes confidential patient information to have a Caldicott Guardian, irrespective of how they are funded.

The role of the Caldicott guardian includes ensuring that health and care information is used ethically, legally and appropriately. The principles also allow for the secure transfer of sensitive information to other agencies, for example social services, education, the police and the judicial system. Further details of the principles can be found here.

The Common Law Duty of Confidentiality (“CLDC”)

Under the CLDC, information that has been obtained in confidence should not be used or disclosed further, unless the individual who originally confided such information is aware or subsequently provides their permission.

All NHS Bodies and those carrying out functions on behalf of the NHS have a duty of confidence to service users and a duty to support professional and ethical standards of confidentiality. This duty of confidence also extends to private and third-sector organisations providing healthcare services.

NHS-specific guidance

Providers who work under the NHS Standard Contract may also utilise the NHS Digital Data Security and Protection Toolkit to measure their performance against the National Data Guardian’s 10 data security standards. All organisations that have access to NHS patient data and systems must use this toolkit to provide assurance that they are practising good data security and that personal information is handled appropriately.

Furthermore, the toolkit contains a breach assessment grid, which uses a risk score matrix to help decide the severity of a breach and whether it needs to be reported, supporting the reporting of security incidents to the ICO, the Department of Health and Social Care and NHS England.
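As a purely illustrative aside, the sketch below shows the general shape of a likelihood-and-impact risk score matrix of the kind such a breach assessment grid uses. The scales, scores and reporting threshold here are invented for the example; the toolkit defines its own.

```python
# Purely illustrative likelihood x impact scoring, loosely in the shape of a
# breach assessment grid. The scales, weights and reporting threshold below
# are invented for the example; the real toolkit defines its own.
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3, "highly likely": 4}
IMPACT = {"minimal": 1, "limited": 2, "significant": 3, "severe": 4}
REPORT_THRESHOLD = 6  # invented escalation threshold

def breach_score(likelihood: str, impact: str) -> tuple:
    """Return the risk score and whether it would cross the (invented) reporting bar."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    return score, score >= REPORT_THRESHOLD

score, reportable = breach_score("possible", "severe")
print(f"risk score: {score}, escalate for reporting: {reportable}")
```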

Health and Care Act 2022

As integrated care systems continue to develop, the new Health and Care Act 2022 introduces significant reforms to the organisation and delivery of health and care services in England. In particular, the Act makes numerous changes affecting NHS England (which has now subsumed NHS Digital), allowing it to require data from private health care providers when it considers it necessary or expedient to have such data in order to comply with a direction from the Secretary of State to establish an information system.

The Act also allows the Secretary of State for Health and Social Care to mandate standards for processing of information to both private and public bodies that deliver health and adult social care, so that data flows through the system in a usable way, and that when it is accessed or provided (for whatever purpose) it is in a standard form, both readable by and consistently meaningful to the user or recipient.

Benefits of sharing personal data  

Healthcare professionals have a legal duty to share information to support individual care (unless the individual objects). This is set out in the Health and Social Care Act 2012 and the Health and Social Care (Quality and Safety) Act 2015. The sharing of health and social data between NHS organisations and pharmacies could transform the way healthcare services are provided, as well as providing continuity between the various providers. Having a single point of contact with patients is what makes the healthcare system in the UK distinct from other systems around the world. In addition, patient information could be used for research purposes as well as in the development and deployment of data-driven technologies.

A note on cyber security

Given the sensitive nature of health data and patient information, healthcare providers are particularly susceptible to data breaches. In response to the UK government’s cyber security strategy to 2030, the Department of Health & Social Care published a policy paper entitled ‘A cyber resilient health and adult social care system in England: cyber security strategy to 2030’ in March 2023.

Cyber resilience is critical in the healthcare sector and providers must be able to prevent, mitigate and recover from cyber incidents. Strong cyber resilience dovetails with providers’ obligations under UK GDPR to maintain appropriate technical and organisational measures. For public providers and those providing into the public sector, a deep awareness of the DHSC’s Strategy is critical.

Consequences for failure to comply

Whilst there is a lot of focus on the maximum fines under UK GDPR of £17.5 million or 4% of the company’s total worldwide annual turnover (whichever is higher), in the context of the healthcare sector, there is also significant reputational risk in terms of both an organisation’s relationship with its patients and with its customers and supply chain. Organisations should also be aware of their potential liability resulting from claims from patients and potential contractual liability and consequences.


For data privacy, access is as vital as security 


By Jaeger Glucina, MD and Chief of Staff, Luminance 

If you’re in the UK, you could hardly have missed the story this summer about Nigel Farage’s public showdown with the specialist bank Coutts. What started as an apparent complaint about a lack of service being provided to Farage quickly became a significant political talking point and, ultimately, resulted in the CEO of the NatWest-owned bank resigning his position.

However, if your work sees you taking responsibility for security, compliance, and business continuity, you may need to take stock of how this story highlights an approaching risk factor that all companies need to be aware of. While the details of Coutts’ decision to drop Farage as a customer were being splashed across the newspapers’ front pages, the way in which Farage actually obtained that information remained very much a secondary story.

Those details were obtained when Farage lodged a data subject access request, or ‘DSAR’, with Coutts. This legal mechanism, introduced as part of the EU’s General Data Protection Regulation, compels organisations to identify, compile, and share every piece of information that they hold relating to an individual. This could range from basic data like names and addresses in a customer database to internal email or text conversations pertaining to them.

The purpose, as with analogous legislation like the California Consumer Privacy Act, is to tip the scales of power around matters of data and privacy back in favour of the consumer. To achieve that, there is real regulatory muscle to ensure that DSARs are acted on. Upon receipt, organisations must respond within thirty days, and non-compliance can carry a fine of up to 4% of the business’s annual global turnover.

The reputational damage that a DSAR could trigger for some businesses should, by now, be readily apparent. Even benign requests can pose a serious challenge to an organisation’s legal resource.

While the potentially punitive results of non-compliance make DSARs a priority issue, mounting a response is not as easy as you might think. The breadth of the request demands an exhaustive and wide-ranging search through information systems, including records of Slack messages and video calls as well as emails, documents, spreadsheets, and databases. At the same time, of course, our usage of such systems is ever-expanding. Every new productivity tool in an organisation’s arsenal will represent a potential landing point for sensitive data which needs to be collated, analysed and appropriately redacted in a DSAR process.

You can imagine that for legal teams this is an onerous workload which saps capacity from higher-value areas of work that drive business growth. Worse, it is a highly labour-intensive, repetitive process which few legal professionals would ideally choose to engage in. Many external firms won’t take DSAR cases on, and if one can be found the fees will likely run to tens of thousands of pounds.

All of that adds up to a growing need for a new kind of data discoverability: not just a way for businesses to oversee data siloes, but to analyse and draw from them in a highly specific way which meets strict legal criteria.

Clearly, the repetitive and precise nature of the task makes it a perfect candidate for automation. With AI, teams can rapidly cull datasets down to just those items which are likely to be relevant before identifying any personal data which needs to be excluded or redacted. In one recent rollout of the technology, this resulted in UK-based technology scale-up proSapient halving the time taken to respond to a DSAR and avoiding £20k in costs while maintaining the robust level of detail which GDPR compliance demands.
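To make the "cull, then flag for redaction" step concrete, here is a deliberately crude sketch. Real DSAR tooling covers email archives, chat exports and many document formats, and uses trained models rather than regular expressions; the folder name, subject and patterns below are illustrative assumptions.

```python
# Crude illustration of the "cull, then flag for redaction" step. Real DSAR
# tooling handles email archives, chat exports and many file formats and uses
# trained models; this sketch just scans plain-text files with regexes.
import re
from pathlib import Path

SUBJECT = "Jane Example"                      # the data subject of the request
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d \-()]{8,}\d")

def cull_and_flag(folder: str) -> list:
    """Keep only files mentioning the subject; flag other personal data for review."""
    results = []
    for path in Path(folder).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        if SUBJECT.lower() not in text.lower():
            continue                          # not relevant to this DSAR
        flags = EMAIL_RE.findall(text) + PHONE_RE.findall(text)
        results.append({"file": str(path), "redaction_candidates": flags})
    return results

if __name__ == "__main__":
    for item in cull_and_flag("./exported_messages"):   # illustrative folder name
        print(item["file"], "->", len(item["redaction_candidates"]), "items to review")
```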

Any data professional out there knows that a proliferation of personal data residing in systems is an almost inevitable consequence of our modern working practices: digital tools underpin our productivity, and information about people, whether they are customers, clients, or employees, is relevant to almost any process.

Anecdotally, we know that whenever a story involving DSARs hits the headlines, businesses experience a spike of requests. The GDPR may now be half a decade old, but awareness of how it can be leveraged will only continue to grow – far past the capacity of existing tools and team structures to cope.

That means that empowering legal teams with the tools they need to manage this new data reality is of paramount importance, both to safeguard the organisation’s future resilience and continuity, and to enable them to focus on delivering the levels of productivity expected of them.

Where does GenAI fit into the data analytics landscape?


Recently, there has been a lot of interest and hype around Generative Artificial Intelligence (GenAI), such as ChatGPT and Bard. While these applications are more geared towards the consumer, there is a clear uptick in businesses wondering where this technology can fit into their corporate strategy. James Gornall, Cloud Architect Lead at CTS, explains the vital difference between headline-grabbing consumer tools and proven, enterprise-level GenAI…

Understanding AI

Given the recent hype, you’d be forgiven for thinking that AI is a new capability, but in actual fact, businesses have been using some form of AI for years – even if they don’t quite realise it.

One of the many applications of AI in business today is predictive analytics. By analysing datasets to identify patterns and predict future outcomes, businesses can more accurately forecast sales, manage inventory, detect fraud and plan resource requirements.
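As a toy illustration of that predictive-analytics pattern, the snippet below forecasts the next month's sales as a simple moving average of recent history. Production systems use far richer models and data; the figures here are invented.

```python
# Toy predictive-analytics example: forecast next month's sales as a simple
# moving average of recent history. Real deployments use richer models and
# data; the figures below are invented.
from statistics import mean

monthly_sales = [120, 135, 128, 150, 162, 158]   # illustrative units sold

def moving_average_forecast(history, window=3):
    """Predict the next value as the mean of the last `window` observations."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return mean(history[-window:])

print("Forecast for next month:", moving_average_forecast(monthly_sales))
```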

Using data visualisation tools to make complex data simpler to understand and more accessible, decision-makers can easily spot trends, correlations and outliers, leading them to make better-informed data-driven decisions, faster.

Another application of AI commonly seen is to enhance customer service through the use of AI-powered chatbots and virtual assistants that meet the digital expectations of customers, by providing instant support when needed.

So what’s new?

What is changing with the commercialisation of GenAI is the ability to create entire new datasets based on what has been learnt previously. GenAI can use the millions of images and information it has searched to write documents and create imagery at a scale never seen before. This is hugely exciting for organisations’ creative teams, providing unprecedented opportunities to create new content for ideation, testing, and learning at scale. With this, businesses can rapidly generate unique, varied content to support marketing and brand.

The technology can use data on customer behaviour to deliver quality personalised shopping experiences. For example, retailers can provide unique catalogues of products tailored to an individual’s preferences, to create a totally immersive, personalised experience. In addition to enhancing customer predictions, GenAI can provide personalised recommendations based on past shopping choices and provide human-like interactions to enhance customer satisfaction.

Furthermore, GenAI supports employees by automating a variety of tasks, including customer service, recommendation, data analysis, and inventory management. In turn, this frees up employees to focus on more strategic tasks.

Controlling AI

The latest generation of consumer GenAI tools have transformed AI awareness at every level of business and society. In the process, they have also done a pretty good job of demonstrating the problems that quickly arise when these tools are misused. The examples range from users who may not realise the risks of inputting confidential code into ChatGPT, unaware that they are leaking valuable Intellectual Property (IP) that could be included in the chatbot’s future responses to other people around the world, to lawyers fined for using fictitious ChatGPT-generated research in a legal case.

While this latest iteration of consumer GenAI tools is bringing awareness to the capabilities of this technology, there is a lack of education around the way it is best used. Companies need to consider the way employees may be using GenAI that could potentially jeopardise corporate data resources and reputation.

With GenAI set to accelerate business transformation, AI and analytics are rightly dominating corporate debate, but as companies adopt GenAI to work alongside employees, it is imperative that they assess the risks and rewards of cloud-based AI technologies as quickly as possible.

Trusted Data Resources

One of the concerns for businesses to consider is the quality and accuracy of the data provided by GenAI tools. This is why it is so important to distinguish between the headline grabbing consumer tools and enterprise grade alternatives that have been in place for several years.

Business specific language is key, especially in jargon heavy markets, so it is essential that the GenAI tool being used is trained on industry specific language models.

Security is also vital. Commercial tools allow a business to set up its own local AI environment where information is stored inside the virtual safety perimeter. This environment can be tailored with a business’ documentation, knowledge bases and inventories, so the AI can deliver value specific to that organisation.
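A highly simplified sketch of what tailoring an AI environment with internal documentation can look like is shown below: relevant internal snippets are retrieved and placed in the prompt, so answers draw on company data rather than the open web. Real deployments use embeddings and vector search; the keyword-overlap retrieval and sample documents here are stand-ins.

```python
# Simplified "grounding" sketch: retrieve the most relevant internal snippets
# and put them in the prompt so answers are based on company documentation.
# Production systems use embeddings and vector search; naive keyword overlap
# and the sample documents below are stand-ins.
def retrieve(question, documents, top_n=2):
    """Rank internal documents by keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_n]]

def build_grounded_prompt(question, documents):
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

internal_docs = {
    "returns_policy.md": "Customers may return goods within 30 days with proof of purchase.",
    "warranty.md": "All hardware carries a 12 month warranty covering manufacturing faults.",
}
print(build_grounded_prompt("How long do customers have to return goods?", internal_docs))
```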

While these tools are hugely intuitive, it is also important that people understand how to use them effectively.

Providing structured prompts and being specific in the way questions are asked is one thing, but users need to remember to think critically rather than simply accept the results at face value. A sceptical viewpoint is a prerequisite – at least initially. The quality of GenAI results will improve over time as the technology evolves and people learn how to feed valid data in, so they get valid data out. However, for the time being people need to take the results with a pinch of salt.

It is also essential to consider the ethical uses of AI.

Avoiding bias is a core component of any Environmental, Social and Governance (ESG) policy. Unfortunately, there is an inherent bias that exists in AI algorithms so companies need to be careful, especially when using consumer level GenAI tools.

For example, finance companies need to avoid algorithms producing biased outcomes against customers wanting to access certain products, or offering different interest rates based on discriminatory data.

Similarly, medical organisations need to ensure ubiquitous care across all demographics, especially when different ethnic groups experience varying risk factors for some diseases.

Conclusion

AI is delivering a new level of data democratisation, allowing individuals across businesses to easily access complex analytics that has, until now, been the preserve of data scientists. The increase in awareness and interest has also accelerated investment, transforming the natural language capabilities of chatbots, for example. The barrier to entry has been reduced, allowing companies to innovate and create business specific use cases.

But good business and data principles must still apply. While it is fantastic that companies are now actively exploring the transformative opportunities on offer, they need to take a step back and understand what GenAI means to their business. Before rushing to meet shareholder expectations for AI investment to achieve competitive advantage, businesses must first ask themselves, how can we make the most of GenAI in the most secure and impactful way?

The role of network cameras as sensors to support digital transformation


Axis Communications’ Linn Storäng explains how high-quality video data on open IT architecture might support the digital transformation of business functions…

To think of the network camera – an essential part of any company’s security infrastructure – as being a tool for security purposes alone could be a missed opportunity. Today’s cameras are versatile IoT devices capable of offering a number of additional benefits through analysis of high-quality video data that can be used to accelerate and enhance digital transformation efforts. The key to discovering that potential is to refocus; if a camera can see it, your systems can act upon it.

Digital video is not simply about security – it is also an extraordinary source of data. Over the years network cameras have been bolstered by higher grade image quality, improved bandwidth efficiency, and more powerful processing both on board and in the cloud, while the addition of advanced analytics and AI capabilities adds a wealth of functionality. And when wedded to open IT architecture that paves the way for ease of integration, a world of smarter possibilities awaits.

Benefits of the camera as a sensor

Thinking of network cameras as sensors and applying analytics to video data can help identify trends that develop over time, or highlight issues and insights in real time, all without requiring a human to constantly survey the output. Properly applied, that data may also be valuable in building predictive models which can help improve future efficiency or discover brand new directions for your IT provision or the business.
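As a rough sketch of the camera-as-a-sensor idea, the snippet below reduces a video stream to a single activity score per frame, which could then be logged, trended or alerted on like any other sensor reading. It assumes a camera that OpenCV can open (a local device here; a network camera would typically be opened via its RTSP URL), and the threshold is arbitrary.

```python
# Rough sketch of the "camera as a sensor" idea: reduce each frame to one
# activity score (mean pixel change) that can be logged, trended or alerted
# on. Assumes a camera OpenCV can open; a network camera would typically be
# opened via its RTSP URL instead of device index 0. Threshold is arbitrary.
import cv2

def activity_stream(source=0, threshold=12.0):
    cap = cv2.VideoCapture(source)
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("could not read from camera source")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.absdiff(gray, prev_gray).mean()    # average per-pixel change
        if score > threshold:
            print(f"activity spike: {score:.1f}")      # hand off to monitoring/alerting here
        prev_gray = gray
    cap.release()

if __name__ == "__main__":
    activity_stream()
```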

Cameras can provide secondary security reassurance, monitor critical systems for temperature changes, ensure production lines are running efficiently and even detect the early signs of an outbreak of fire. Ease of deployment and integration means cameras can be employed on a smaller scale, keeping a digital eye on otherwise difficult-to-access equipment, and simultaneously on a wider scale, using their vast field of view to monitor large areas.

Exploiting camera data in this way may reduce complexity, removing the need to install and administer additional banks of sensors. It may, conversely, help existing granular sensor data to be enhanced and contextualised. It can help the introduction of novel functions, adding value to digital transformation efforts without incurring further costs.

Important considerations for platform selection

However, the picture isn’t, perhaps, quite that clear. The ubiquity of cameras means they are theoretically straightforward to integrate into a digital transformation effort, but that all-important data must be reachable in a safe way. The camera’s hardware, firmware and ecosystem need to be flexible enough to support whatever application your business wishes to build – open enough to be useful, but robust enough not to present a risk.

They must also be accessible by all entities that need that access – be they personnel, third-party integrations, or bespoke applications – opening as many data points as possible without exposing security holes. To be effective, cameras should be deployed as part of an overall IT infrastructure rather than being siloed into merely acting as part of a security function or chosen without consideration of additional use cases. This also helps to guarantee that such hardware can be properly managed and updated over its lifetime.

To that end, today’s cameras should be backed by dedicated support whenever and wherever it is required. These devices must be cybersecure. Firmware that keeps pace with the latest threats is crucial when using IoT devices in a corporate setting, but rolling out a firmware update to a network of hundreds or perhaps thousands of mission-critical cameras is no trivial task.

Good network cameras are those which offer administrative control, a considered upgrade path which suits your business but can react quickly to new threats, and the tools to make applying such upgrades as simple as possible – all while leaving third-party integrations intact and unharmed. And these decisions must be made when considering product lifecycles too; long-term support and sustainability may be one of the most vital properties of a network camera given the expense and upheaval of purchasing and installing hardware.

A new generation of IP cameras

The network camera space has grown to meet the needs of its traditional users and this new set of wider IT use cases. Cameras can now routinely integrate neatly with, for example, DCIM systems to help the creation of bespoke applications. They can include features like visual overlays which make alerts and analytics clear and concise. Today’s cameras are built to be lean, with technology designed to minimise energy use, demand minimal network bandwidth, and even reduce the load on cloud servers by performing complex computation on the edge – all while simplifying maintenance through secure tools which smooth the process of managing large networks of IoT devices.

Cameras should not only be included in your digital transformation plan but should also become a core part of it. The potential of digital video, and the number of solutions edge processing and hardware integration can offer, is growing fast. Video analytics offers accurate, fast results – and even if your transformation is in the early stages, building a strong infrastructure now opens doors for a smarter future.

Learn more about how network cameras can support your digital transformation agenda

About the Author – Linn Storäng, Regional Director Northern Europe, Axis Communications
Linn has held senior positions within strategic roles at Axis Communications for the past 5 years, recently becoming Regional Director for Northern Europe. Linn is a strategic thinker who likes to be very closely involved with business and operations processes, leading by example and striving to empower colleagues with her positivity and passion for innovation. Linn relishes the ongoing challenge to find new ways to meet the needs of her customers, and strives to forge ever stronger relationships with partner businesses. Prior to joining Axis, Linn held senior sales and account management roles within the construction industry.

The secrets of no drama data migration


With Mergers, Acquisitions and Divestments at record levels, the speed and effectiveness of data migration has come under the spotlight. Every step of this data migration process raises concerns, especially in spin-off or divestment deals where just one part of the business is moving ownership. 

What happens if confidential information is accessed by the wrong people? If supplier requests cannot be processed? If individuals with the newly acquired operation have limited access to vital information and therefore do not feel part of the core buyer’s business? The implications are widespread – from safeguarding Intellectual Property, to staff morale, operational efficiency, even potential breach of financial regulation for listed companies.

With traditional models for data migration recognised as high risk, time consuming and liable to derail the deal, Don Valentine, Commercial Director at Absoft, explains the need for a different approach – one that not only de-risks the process but adds value by reducing the time to migrate and delivering fast access to high quality, transaction-level data…

Record Breaking

2021 shattered Merger & Acquisition (M&A) records – with M&A volume hitting over $5.8 trillion globally. In addition to whole company acquisitions, 2021 witnessed announcements of numerous high-profile deals, from divestments to spin-offs and separations. But M&A performance history is far from consistent. While successful mergers realise synergies, create cost savings and boost revenues, far too many are derailed by cultural clashes, a lack of understanding and, crucially, an inability to rapidly combine the data, systems and processes of the merged operations.

The costs can be very significant, yet many companies still fail to undertake the data due diligence required to safeguard the M&A objective. Finding, storing and migrating valuable data is key, before, during, and post M&A activity. Individuals need access to data during the due diligence process; they need to migrate data to the core business to minimise IT costs while also ensuring the acquired operation continues to operate seamlessly.  And the seller needs to be 100% confident that only data pertinent to the deal is ever visible to the acquiring organisation.

Far too often, however, the data migration process adds costs, compromises data confidentiality and places significant demands on both IT and business across both organisations.

Data Objectives

Both buyer and seller have some common data migration goals. No one wants a long-drawn-out project that consumes valuable resources. Everyone wants to conclude the deal in the prescribed time. Indeed, completion of the IT integration will be part of the Sales & Purchase Agreement (SPA) and delays could have market facing implications. Companies are justifiably wary of IT-related disruption, especially any downtime to essential systems that could compromise asset safety, production or efficiency; and those in the business do not want to be dragged away from core operations to become embroiled in data quality checking exercises.

At the same time, however, there are differences in data needs that can create conflict. While the seller wants to get the deal done and move on to the next line in the corporate agenda, the process is not that simple. How can the buyer achieve the essential due diligence while meeting the seller’s need to safeguard non-deal related data, such as HR, financial history and sensitive commercial information? A seller’s CIO will not want the buying company’s IT staff in its network, despite acknowledging the buyer needs to test the solution. Nor will there be any willingness to move the seller’s IT staff from core strategic activity to manage this process.

For the buyer it is vital to get access to systems. It is essential to capture vital historic data, from stock movement to asset maintenance history. The CIO needs early access to the new system, to provide confidence in the ability to operate effectively after the transition – any concerns regarding data quality or system obsolescence need to be flagged and addressed early in the process. The buyer is also wary of side-lining key operations people by asking them to undertake testing, training and data assurance.

While both organisations share a common overarching goal, the underlying differences in attitudes, needs and expectations can create serious friction and potentially derail the data assurance process, extend the SPA, even compromise the deal.

Risky Migration

To date, processes for finding, storing and managing data before, during and after M&A activity have focused on the needs of the selling company. The seller provided an extract of the SAP system holding the data relevant to the agreed assets and shared that with the buyer. The buyer then had to create the configuration and software to receive the data, transform it, and then carry out application data migration to provide operational support for key functions such as supplier management.

This approach is fraught with risk. Not only is the buyer left blind to data issues until far too late but the entire process is time consuming. It also typically includes only master data, not the transactional history required, due to the serious challenges and complexity associated with mimicking the chronology of transactional data loading. Data loss, errors and mis-mapping are commonplace – yet only discovered far too late in the process, generally after the M&A has been completed, leaving the buyer’s IT team to wrestle with inaccuracy and system obsolescence.

More recently, different approaches have been embraced, including ‘behind the firewall’ and ‘copy/raze’.  The former has addressed some of the concerns by offering the buyer access to the technical core through a temporary separated network that houses the in-progress build of the buyer’s systems. While this avoids the need to let the buyer into the seller’s data and reduces the migration process as well as minimising errors, testing, training and data assurance, it is flawed. It still requires the build of extract and load programs and also uses only master data for the reasons stated above. It doesn’t address downtime concerns because testing and data assurance is still required. And it still demands the involvement of IT resources in non-strategic work.  Fundamentally, this approach is still a risk to the SPA timeframe – and therefore does not meet the needs of buyer or seller.

The ‘copy/raze’ approach has the benefit of providing transactional data. The seller creates an entire copy and then deletes all data relating to assets not being transferred before transferring to the buyer. However, this model requires an entire portfolio of delete programmes which need to be tested – a process that demands business input. Early visibility of the entire data resources ensures any problems that could affect the SPA can be flagged but the demands on the business are also significant – and resented.

De-risking Migration

A different approach is urgently required. The key is to take the process into an independent location. Under agreement between buyer, seller and data migration expert, the seller provides the entire technical core which is then subjected to a dedicated extract utility. Configuration is based on the agreed key deal assets, ensuring the extraction utility automatically undertakes SAP table downloads of only the data related to these assets – removing any risks associated with inappropriate data access. The process is quicker and delivers better quality assurance. Alternatively, the ‘copy/raze’ approach can be improved by placing the entire SAP system copy into escrow – essentially a demilitarised zone (DMZ) in the cloud – on behalf of both parties.  A delete utility is then used to eradicate any data not related to the deal assets – with the data then verified by the seller before the buyer has any access. Once confirmed, the buyer gains access to test the new SAP system prior to migration.
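To illustrate the 'copy/raze' principle only, the sketch below copies a small SQLite dataset, deletes everything not tied to the deal assets and verifies the result before any handover. A real SAP carve-out relies on purpose-built extract and delete utilities and far more rigorous verification; the table name, column and asset codes here are invented.

```python
# Toy 'copy/raze' illustration: duplicate the dataset, delete everything not
# tied to the deal assets, verify, and only then hand over. A real SAP
# carve-out uses purpose-built extract/delete utilities and far more rigorous
# verification; the table name, column and asset codes here are invented.
import shutil
import sqlite3

DEAL_ASSETS = {"PLANT_A", "PLANT_C"}            # assets actually being sold

def raze_copy(source_db: str, escrow_db: str) -> None:
    shutil.copyfile(source_db, escrow_db)        # 1. copy the full system into escrow
    con = sqlite3.connect(escrow_db)
    placeholders = ",".join("?" for _ in DEAL_ASSETS)
    con.execute(                                 # 2. raze non-deal data, history included
        f"DELETE FROM transactions WHERE asset NOT IN ({placeholders})",
        tuple(DEAL_ASSETS),
    )
    con.commit()
    remaining = {row[0] for row in con.execute("SELECT DISTINCT asset FROM transactions")}
    assert remaining <= DEAL_ASSETS, "non-deal data survived the raze step"   # 3. verify
    con.close()
```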

These models can be used separately and in tandem, providing a data migration solution with no disruption and downtime reduced from weeks to a weekend. The resultant SAP solution can be optimally configured as part of the process, which often results in a reduction in SAP footprint, with the attendant cost benefits.  Critically, because the buyer gains early access to the transaction history, there is no need for extensions for the SPA – while the seller can be totally confident that only the relevant data pertaining to the deal is ever visible to the buyer.

Conclusion

By embracing a different approach to data migration, organisations can not only assure data integrity and minimise the downtime associated with data migration but also reduce the entire timescale. By cutting the data due diligence and migration process from nine months to three, the M&A SPA can be significantly shorter, reducing the costs associated with the transaction while enabling the buyer to confidently embark upon new strategic plans.

Data security to drive IT security market to new highs

By Stuart O'Brien

The global cyber security market is estimated to record a CAGR of 10.5% between 2022 and 2032, driven by surging awareness among internet users about the sensitivity of their private data and impending legal actions prompting businesses to secure their online data by following the best practices.

That’s according to a report from Future Market Insights, which says increasing complexities associated with manual identification of vulnerabilities, frauds and threats encourage organisations to fool-proof their data. Owing to these phishing and data threats, the adoption of cyber safety solutions is estimated to grow at a ‘remarkable’ rate.

Key Takeaways

  • The demand for cyber-security solutions has increased over the past decade due to a surge in online threats such as computer intrusion (hacking), virus deployment and denial of services. Due to the expansion in computer connectivity, it has become of utmost importance to keep your data safe from intruders and impersonators.
  • Increased government regulations on data privacy are one of the key drivers of the cyber security market. In addition to that, accelerating cyber threats and an increasing number of data centers are the biggest revenue generators for the cyber security market.
  • There are various benefits offered by the cyber security market such as improved security of cyberspaces, increased cyber safety and faster response time to the national crisis. Backed by these benefits, the cyber security market is projected to showcase skyrocketing growth over the forecast years (2022-2032).
  • All in all, the cyber security market across the globe is a multi-billion market and is expected to show substantial CAGR growth from 2022 to 2032. There is a significant increase in the cyber security market because cyber security solutions improve response times and offer a number of options to safeguard data.
  • Large investments in the global cyber security market by various countries such as the US, Canada, China and Germany are witnessed owing to the expansion in computer interconnectivity and dramatic computing power of government networks.

IBM International, Booz Allen Hamilton, Cisco, Lockheed Martin, McAfee, CA Technologies, Northrop Grumman, Trend Micro, Symantec, and SOPHOS are some of the key companies profiled in the full version of the report.

Key players in the cyber security market are consciously taking steps concerning their information security, which is inspiring other businesses to follow in their footsteps and stay updated with the latest IT security strategy.

The adoption of cyber safety solutions is anticipated to grow impeccably as businesses look at curbing their steep financial losses arising from cyberattacks.

Salesforce security: 5 ways your data could be exposed


By Varonis

Salesforce is the lifeblood of many organizations. One of its most valuable assets, the data inside, is also its most vulnerable. With countless permission and configuration possibilities, it’s easy to leave valuable data exposed.

That, coupled with the fact that most security organizations aren’t very familiar or involved with Salesforce’s administration, opens organizations up to massive risk.

Here are five things every security team should know about their Salesforce security practices to effectively gauge and reduce risk to data. 

5 Questions You Should Ask:

  1. How many profiles have “export” permissions enabled? 

Exporting data from Salesforce makes it a lot easier for someone to steal information like leads or customer lists. To protect against insider threats and data leaks, export capabilities should be limited to only the users who require it. (A scripted version of this check is sketched after these five questions.)

  2. How many apps are connected to Salesforce via API?

Connected apps can bring added efficiency to Salesforce, but they can also introduce added risk to your Salesforce security.

If a third-party app is compromised, it could expose internal Salesforce data. You should know exactly what’s connected to your Salesforce instance and how to ensure that connection doesn’t expose valuable information.

  3. How many external users have access to Salesforce?

External users, like contractors, are often granted access to Salesforce. Surprisingly, 3 out of 4 cloud identities that belong to external contractors remain active after they leave the organization.

Salesforce security teams should ensure all contractors are properly offboarded from all SaaS apps to prevent data from being exposed.

  4. How many privileged users do you have?

Privileged users have a lot of power within Salesforce. They can make configuration changes that have dramatic effects on how information can be accessed and shared.

Salesforce security teams need the ability to audit privileged users, be notified when changes are made, and understand exactly what changed to assess risk.

  5. Are your Salesforce Communities exposing internal data publicly?

Misconfigurations are one of the easiest ways to unintentionally expose sensitive data. For security teams that aren’t intimately familiar with every configuration within Salesforce (of which there are many!), it’s easy to miss critical gaps.

Check to see if settings for Salesforce Communities, meant to share information with customers, are inadvertently making data accessible to anyone on the internet.
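As an example of how a security team might script the first of these checks, the hedged sketch below uses the simple_salesforce Python library to list profiles with a report-export permission enabled. The credentials are placeholders and the exact permission field queried is an assumption; audit the Profile and permission set fields in your own org rather than trusting this list.

```python
# Hedged sketch of scripting the first question above with the
# simple_salesforce library. Credentials are placeholders, and the permission
# field queried (PermissionsExportReport) is an assumption -- audit the
# Profile/PermissionSet fields in your own org rather than trusting this list.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="security.audit@example.com",       # placeholder credentials
    password="********",
    security_token="********",
)

result = sf.query("SELECT Id, Name FROM Profile WHERE PermissionsExportReport = true")
print(f"{result['totalSize']} profiles can export report data:")
for record in result["records"]:
    print(" -", record["Name"])
```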

Improve your Salesforce security with DatAdvantage Cloud

With Varonis DatAdvantage Cloud, it’s easy to answer these and other critical security questions about Salesforce and other SaaS apps in your environment, like Google Drive and Box.

DatAdvantage Cloud keeps valuable data in Salesforce secure by monitoring access and activity, alerting on suspicious behavior, and identifying security posture issues or misconfiguration.

Click here to view the full article and visit the Varonis website.

Cloud applications put your data at risk — Here’s how to regain control


By Yaki Faitelson, Co-Founder and CEO of Varonis

Cloud applications boost productivity and ease collaboration. But when it comes to keeping your organisation safe from cyberattacks, they’re also a big, growing risk.

Your data is in more places than ever before. It lives in sanctioned data stores on premises and in the cloud, in online collaboration platforms like Microsoft 365 and in software-as-a-service (SaaS) applications like Salesforce.

This digital transformation means traditional security focused on shoring up perimeter defenses and protecting endpoints (e.g., phones and laptops) can leave your company dangerously exposed. When you have hundreds or thousands of endpoints accessing enterprise data virtually anywhere, your perimeter is difficult to define and harder to watch. If a cyberattack hits your company, an attacker could use just one endpoint as a gateway to access vast amounts of enterprise data.

Businesses rely on dozens of SaaS applications — and these apps can house some of your organisation’s most valuable data. Unfortunately, gaining visibility into these applications can be challenging. As a result, we see several types of risk accumulating more quickly than executives often realise.

Three SaaS Security Risks To Discuss With Your IT Team Right Now

Unprotected sensitive data. SaaS applications make collaboration faster and easier by giving more power to end users. They can share data with other employees and external business partners without IT’s help. With productivity gains, we, unfortunately, see added risk and complexity.

On average, employees can access millions of files (even sensitive ones) that aren’t relevant to their jobs. The damage that an attacker could do using just one person’s compromised credentials — without doing anything sophisticated — is tremendous.

With cloud apps and services, the application’s infrastructure is secured by the provider, but data protection is up to you. Most organisations can’t tell you where their sensitive data lives, who has access to it or who is using it, and SaaS applications are becoming a problematic blind spot for CISOs.

Let’s look at an example. Salesforce holds critical data — from customer lists to pricing information and sales opportunities. It’s a goldmine for attackers. Salesforce does a lot to secure its software, but ultimately, it’s the customer’s responsibility to secure the data housed inside it. Most companies wouldn’t know if someone accessed an abnormal number of account records before leaving to work for a competitor.

Cloud misconfigurations. SaaS application providers add new functionality to their applications all the time. With so much new functionality, administrators have a lot to keep up with and many settings to learn about. If your configurations aren’t perfect, however, you can open your applications — and data — to risk. And not just to anyone in your organisation but to anyone on the internet.

It only takes one misconfiguration to expose sensitive data. As the CEO of a company that has helped businesses identify misconfigured Salesforce Communities (websites that allow Salesforce customers to connect with and collaborate with their partners and customers), I’ve seen firsthand how, if not set up correctly, these Communities can also let malicious actors access customer lists, support cases, employee email addresses and more sensitive information.

App interconnectivity risk. SaaS applications are more valuable when they’re interconnected. For example, many organisations connect Salesforce to their email and calendaring system to automatically log customer communication and meetings. Application program interfaces (APIs) allow SaaS apps to connect and access each other’s information.

While APIs help companies get more value from their SaaS applications, they also increase risk. If an attacker gains access to one service, they can use these APIs to move laterally and access other cloud services.

Balancing Productivity And Security In The Cloud

When it comes to cloud applications and services, you must balance the tension between productivity and security. Think of it as a broad, interconnected attack surface that can be compromised in new ways. The perimeter we used to defend has disappeared. Endpoints are access points.

Now consider what you’re up against. Cybercrime — whether it’s malicious insiders or external actors — is omnipresent. If you store sensitive data, someone wants to steal it. Tactics created by state actors have spilled over into the criminal realm, and cryptocurrency continues to motivate attackers to hold data for ransom.

Defending against attacks on your data in the cloud demands a different approach. It’s time for cybersecurity to focus relentlessly on protecting data.

Data protection starts with understanding your digital assets and knowing what’s important. I’ve met with large companies that guess between 5-10% of their data is critical. When ransomware hits, however, somehow all of it becomes critical, and many times they end up paying.

Next, you must understand and reduce your SaaS blast radius — what an attacker can access with a compromised account or system.

An attacker’s job is much easier if they only need to compromise one account to get access to your sensitive data. Do everything you can to limit access to important and sensitive data so that employees can only access what they need to do their jobs. This is one of the best defenses, if not the best defense against data-related attacks like ransomware.
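A toy way to picture that blast radius is shown below: given a mapping of who can access what, it measures how much sensitive data a single compromised account would expose. Real tooling derives this from live permissions across SaaS apps; the accounts and resources here are invented.

```python
# Toy "blast radius" calculation: given who can access what, measure how much
# sensitive data one compromised account exposes. Real tooling derives this
# from live permissions across SaaS apps; the mapping below is invented.
access = {
    "alice":   {"crm_accounts", "pricing_sheet", "hr_records"},
    "bob":     {"crm_accounts"},
    "svc_app": {"crm_accounts", "pricing_sheet", "billing_exports"},
}
sensitive = {"pricing_sheet", "hr_records", "billing_exports"}

def blast_radius(account: str) -> set:
    """Sensitive resources reachable if this single account is compromised."""
    return access.get(account, set()) & sensitive

for user in access:
    exposed = blast_radius(user)
    print(f"{user}: {len(exposed)} sensitive resources exposed -> {sorted(exposed)}")
```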

Once you’ve locked down critical data, monitor and profile usage so you can alert on abuse and investigate quickly. Attackers are more likely to trigger alarms if they have to jump through more hoops to access sensitive data.

If you can’t visualize your cloud data risk or know when an attack could be underway, you’re flying blind.

If you can find and lock down important data in cloud applications, monitor how it’s used and detect abuse, you can solve the lion’s share of the problem.

This is the essence of zero trust: restrict and monitor access, because no account or device should be implicitly trusted, no matter where they are or who they say they are. This makes even more sense in the cloud, where users and devices — each one a gateway to your critical information — are everywhere.

This article first appeared on Forbes.

YAKI FAITELSON

Co-Founder and CEO of Varonis, responsible for leading the management, strategic direction, and execution of the company.

Normalising data leaks: A dangerous step in the wrong direction


It was only recently, in early April, when it came to light that the personal data from over 500 million Facebook profiles had been compromised by a data leak in 2019. And since then, an internal Facebook email has been exposed, which was accidentally sent to a Belgian journalist, revealing the social media giant’s intended strategy for dealing with the leaking of account details from millions of users. Worryingly, Facebook believes the best approach is to ‘normalise the fact that this activity happens regularly,’ and to frame such data leaks as a ‘broad industry issue’. 

It’s true that data breaches occur every day, and are increasingly on the rise – new research predicts there will be a cyber attack every 11 seconds in 2021, nearly twice the rate seen in 2019. However, this doesn’t mean that it should be normalised. Quite the opposite in fact, explains Andrea Babbs, UK General Manager, VIPRE SafeSend...

Dangerously dismissive

The statement from Facebook is a very worrying strategy to come from a business which holds the personal and business data of millions across its platforms. Particularly in the wake of increasingly stringent regulations appearing globally, it is startling for such a large organisation to casually dismiss data leaks. To give businesses an excuse to no longer invest time, money and effort in data security is a dangerous step in the wrong direction.

Personal data is a valuable currency for cyber hackers, and individuals want to ensure it is protected. Leaking this confidential data, such as medical information, credit card numbers or personally identifiable information (PII) can have far-reaching consequences for both individuals and businesses. Keeping this data safe should be businesses’ number one priority. However, data is only as safe as the strength of an organisation’s IT security infrastructure and its users’ attention to detail.

A defence on multiple fronts

If you do not have the right technology in place to keep your data safe, then you will face problems – but the same goes for having the right tools and training available to your users. Data security is a difficult and never-ending task, one which requires ongoing investments on multiple fronts by every organisation in the world.

Particularly in the wake of COVID-19, businesses have had to transition to remote working and accelerate their processes to the cloud. Moving to cloud based security which moves with your users is key. And investment in user training will become more normalised because an uneducated workforce is a big risk to an organisation’s data security efforts. 

To combat such threats, deploying a layered security approach is necessary for both small and large businesses. In today’s modern threat landscape, a data protection plan needs to include cover for both people and technology at its core. There are innovative tools available, such as VIPRE’s SafeSend, which supports busy, distracted users to double check their attachments or recipient list before sending an email to help them make more informed decisions around the security of their data. Additionally, companies need to invest in thorough and more frequent security awareness training programmes, which include phishing simulations as a key component.

We will also see a bigger move towards Zero Trust Network Access (ZTNA) tools – which only allow people to access the data they need, not the entire network. There will be an evolution in this area, and protection for a workforce ‘on the go’ will become the standard, but with the same foundational principles of investing in the right technology, and the users themselves. 

Reputation and responsibility

No matter where users are or what they are doing, keeping security front of mind will be one way to ensure good IT security hygiene for businesses. Those who have already made significant progress in this area will reap the rewards in terms of safe data and reassured customers, clients and prospects. 

Businesses that get out in front of all areas of data loss, not just attacks from bad actors, are the ones that will do well in the long term. The ability to reassure customers and prospects of the safety of their data will become the new marketing message in the coming years, which is why attempting to normalise data loss could be so damaging to Facebook’s reputation.

Cyber threats are only going to increase in sophistication and become more personalised to the individual by using social engineering attacks or fileless based attacks. Attackers are going to continue to take advantage of current events, such as COVID-19, to trick users into clicking a link, downloading an attachment or signing into a phishing website etc.

Businesses of all sizes have a responsibility to keep data secure – and users must be a part of the solution, rather than the problem. In order to do this, businesses need to place cybersecurity as a priority throughout their processes and invest in the right tools and training to make this more of a business-critical solution, and less of an ‘emerging necessity’ as it is now.