How Milton Keynes Youth Offending Team uses ChildView to support its work

Milton Keynes Youth Offending Team is part of the multi-agency youth justice partnership involving Milton Keynes Council, Thames Valley Police, Education and Public Health. The team started using ChildView, a specialist youth offending information system from CACI, in 2009, following migration from its previous YOIS system. ChildView is used by 31 multi-disciplinary workers at Milton Keynes Council and, at the time of writing, the team has 160 active youth justice cases.

The administrative problems solved by ChildView

The youth offending team at Milton Keynes Council was using a system of spreadsheets to process and record information. The team realised that ChildView would provide an integrated, whole-service recording and reporting solution to reduce effort and enhance oversight across cases and referrals into and out of its services.

“ChildView can hold all the information we need and allow active case management,” says Phil Coles, business support and information manager at Milton Keynes Youth Offending Team. “I know some YOTs have issues with aspects of their youth justice work. Generally, I’ve found that these issues are due to not having defined business processes that support (or dictate) the recording practices. Using a system like ChildView helps us to define our processes, whilst maintaining all our data in the same place.

“An example of this is the active management of referrals. By using agreed recording processes, we can instantly see which cases have been referred to another agency and whether they have reviewed the case yet. Then we can see when they accept the case and, finally, when they complete their work for us. This used to be managed in folders, then it became a spreadsheet but – by mapping processes – we’ve now got it to a single ChildView report which has a variety of views for each type of referral and whether it’s active or complete. We are also able to provide all stats that have been requested so far, for example how many referrals have been made (or completed) during a period.”

The benefits of ChildView for Milton Keynes YOT

Given the increasingly complex work undertaken by the team, ChildView enables risks to be captured and tracked in near real time. This facilitates holistic case formulation to ensure vulnerable young people in the area receive the best possible responses. To this end, being able to report on activities and to send and receive data on incidents and cases in real time is vital.

“I have written about 150 reports, many of which contain multiple views, and have found that ChildView facilitates rapid access to information for myself and my team,” says Phil. “We are able to store all necessary documents within the application and are just looking at using the communications module to further integrate letters into the system.”

The built-in reporting functionality within ChildView has also supported Milton Keynes YOT. “It’s sufficient for the majority of requests that we receive,” adds Phil. This helps to meet the needs of the service, with relevant information captured in locally defined reports. ChildView also uniquely transfers whole case data records between YOTs, which increases accuracy and reduces the effort and risk involved in tracking young people as they move between localities.

Being able to send, receive and view the full case management story, relational history and context swiftly and securely makes it much easier for YOTs to engage and formulate an effective response with incoming cases, crucially being able to understand what has happened to each young person.

Support from CACI’s specialist team

“I’ve always had excellent support from CACI when making queries or raising issues,” says Phil. “There have been times when a resolution has taken time to arrive at, but they are always worked on. Raising queries is very straightforward and the team is always quick to respond.”

CACI, as part of its service level agreement, responds on the same day to all ChildView support queries received by 5pm. This gives clarity over how issues and queries are dealt with and provides practical next steps. The support desk is staffed from 9am to 5:30pm, Monday to Friday, with 24/7 web-based support call logging also available.

“Myself and my team have generally found ChildView to be easy to use,” concludes Phil. “It does what we need it to do and I haven’t been asked for anything that I haven’t been able to get out of the system.”

For more information on ChildView, please visit: www.caci.co.uk/childview

Why do you need a Zero Trust Model?

Traditional cybersecurity paradigms focus on network-based security strategies like firewalls and other tools to monitor user activities on the network. However, digital transformation and changes in the social environment have driven new cybersecurity strategies that focus on protecting end users, assets and resources. This is the premise of the ‘Zero Trust Model’.

In this new blog series, I’ll explain the reasons for transforming to a Zero Trust Model as well as the benefits and challenges of implementing Zero Trust Network Architecture. I’ll also cover how you might efficiently implement it.

What is a Zero Trust Model?

But first things first: what exactly do we mean by ‘Zero Trust’? Zero Trust is not a technology; it’s a security model with a set of guiding principles for workflow, system design and operations that can be used to improve the security posture of systems at any classification or sensitivity level.

Zero Trust is a transformative journey rather than a complete replacement of technology. Ideally, you should evaluate the security risks in your business model before you start shifting to Zero Trust. Yet, during the COVID-19 pandemic, many companies were forced to quickly replace their central-breakout remote access VPN with cloud-based Zero Trust equivalents such as Netskope, Zscaler or Tailscale. They then had to progressively apply the Zero Trust principles, process changes and technology solutions that protect their data assets and business functions as they went along.

Now, they are often left operating in a hybrid Zero Trust/perimeter-based mode while continuing to invest in IT modernisation initiatives and improve business processes – not ideal.

Why do we need a Zero Trust Model?

In the new working environment we find ourselves in, we need to look at a Zero Trust Model for a number of reasons:

Remote work and BYOD policy for employees
In the post-COVID era, remote working and a BYOD (Bring Your Own Device) policy have become the new normal. The “castle-and-moat” model of network security built around fixed office locations and employer-owned devices cannot cater for every employee’s needs. More staff are working outside the office using their own devices and Wi-Fi networks, which are less secure than those in an office. You need to take a micro-level approach, authenticating and approving every access request in your network to make sure it’s secure.

Digital transformation of customer experience
The customer journey is not limited to retail shops and customers are rapidly shifting their buying behaviour to the internet and mobile applications. Thousands of personal computers and devices across the globe connect to company networks to complete transactions. This means that identity verification is critical for customers before they access their confidential data.

The high complexity of network architecture
In response to the high demand for rapid and secure access to data, anytime and anywhere, your company may operate several internal and external networks, including on-premises systems and cloud environments. Perimeter-based network security offers little protection once attackers breach the perimeter, whereas Zero Trust Network Architecture adds further layers of identity verification, such as least-privilege access control, multi-factor authentication and endpoint verification, for improved security.
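
The layering described above can be made concrete with a small, hypothetical sketch – this is not any particular vendor’s API, just an illustration of a Zero Trust policy decision combining identity, multi-factor authentication, device posture and least-privilege scope before allowing a single request:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified, e.g. via your identity provider
    mfa_passed: bool           # multi-factor authentication completed
    device_compliant: bool     # endpoint verification: managed, patched device
    requested_scope: str       # the action the user is asking to perform
    granted_scopes: frozenset  # least-privilege entitlements for this user

def evaluate(request: AccessRequest) -> bool:
    """Zero Trust: every request is verified on every signal; nothing is
    trusted by default for merely being 'inside' the network."""
    return (
        request.user_authenticated
        and request.mfa_passed
        and request.device_compliant
        and request.requested_scope in request.granted_scopes
    )
```

In a real deployment these signals would come from your identity provider, device management platform and policy engine; the point is that all of them are evaluated on every request, not once at the perimeter.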

Zero Trust forms a strong defence line against cyberattacks

With all these social and network environment changes, the opportunity for your network to be attacked is much greater. The median cost of cyberattacks in the UK rose 29% in 2022 with an average attack costing a business nearly £25,000.

Legacy systems, regulations and compliance practices are no longer sufficient amid increasingly sophisticated threats. Cloud environments are attractive targets for cybercriminals aiming to destroy and steal confidential business data. Amongst these different types of cyberattacks, ransomware tactics have evolved and become the most significant threat.

The Zero Trust Model is another approach to combat the emerging threat landscape which legacy security systems and perimeter approaches can no longer adequately mitigate.

How CACI can help

CACI has cybersecurity experts who can improve the protection levels of your business. Capabilities include Zero Trust Network Architecture, Threat Analytics, Systems Hardening, Network Analytics and Next Generation Firewalls. We perform risk assessments to advise clients on the comprehensive cybersecurity they need.

We also have experts in cloud network on-ramp connectivity (such as Microsoft ExpressRoute, AWS Direct Connect and GCP Dedicated Interconnect) and SASE/SDP/VPN technologies such as Zscaler and Tailscale.

In my next blog, I will be discussing the benefits and the challenges of implementing Zero Trust Network Architecture. However, if you want the whole story, have a read of our Zero Trust Model whitepaper where we cover everything in these blogs and more. Download your copy now.

 


How to find the right IT outsourcing partner

Looking to work with an IT outsourcing provider? Finding the right partner to deliver your requirements can be a tricky and time-consuming process. But, done right, a successful outsourcing relationship can bring long-term strategic benefits to your business. We asked our experts to share their top tips on how to find the right IT outsourcing partner.

Evaluate capabilities

Having the right expertise is the obvious and most essential criterion, so defining your requirements and expectations is the best way to start your search.

When it comes to narrowing down your vendor choices, it’s important to consider the maturity of an organisation as well as technical capabilities. “The risk of working with a small, specialised provider is that they may struggle to keep a handle on your project,” warns Brian Robertson, Resource Manager at CACI. Conversely, a larger organisation may have the expertise, but not the personal approach you’re looking for in a partner. “Always look for a provider that demonstrates a desire to get to the root of your business’s challenges and can outline potential solutions,” Brian advises.

Find evidence of experience

Typically, working with an outsourcing provider that has accumulated experience over many years is a safe bet; however, Daniel Oosthuizen, Senior Vice President of CACI Network Services, recommends ensuring that your prospective outsourcing provider has experience that is relevant to your business: “When you bring in an outsourcing partner, you want them to hit the ground running, not to spend weeks and months onboarding them into your world.” Daniel adds, “This becomes more apparent if you work in a regulated industry, such as banking or financial services, where it’s essential that your provider can guarantee compliance with regulatory obligations as well as your internal policies.”

So, how can you trust that a provider has the experience you’re looking for? Of course, the provider’s website, case studies and testimonials are a good place to start, but Daniel recommends interrogating a vendor’s credentials directly: “A successful outsourcing relationship hinges on trust, so it’s important to get a sense of a vendor’s credibility early on. For example, can they demonstrate an in-depth knowledge of your sector? Can they share any details about whom they currently partner with? And can they confidently talk you through projects they’ve completed that are similar to yours?”

Consider cultural compatibility

“When it comes to building a strong, strategic and successful outsourcing partnership, there’s no greater foundation than mutual respect and understanding,” says Brian. Evaluating a potential provider’s approach and attitudes against your business’s culture and core values is another critical step in your vetting process. As Daniel says, “If you share the same values, it will be much easier to implement a seamless relationship between your business and your outsourcing partner, making day-to-day management, communication and even conflict resolution more effective and efficient”.

While checking a company’s website can give you some insight into your prospective provider’s values, it’s also worth finding out how long they’ve held partnerships with other clients, as that can indicate whether they can maintain partnerships for the long-term.

However, Daniel says, “The best way to test if a provider has partnership potential is to go and meet them. Get a feel for the team atmosphere, how they approach conversations about your challenges, and how their values translate in their outsourcing relationships.” Brian adds, “Your vision and values are what drive your business forward, so it’s essential that these components are aligned with your outsourcing provider to gain maximum value from the relationship.”

Assess process and tools

Once you’ve determined a potential outsourcing provider’s level of experience and expertise, it’s important to gain an understanding of how they will design and deliver a solution to meet your business’s needs. “It’s always worth investigating what tech and tools an outsourcing provider has at their disposal and whether they are limited by manufacturer agreements. For example, at CACI, our vendor-agnostic approach means we’re not tied to a particular manufacturer, giving us the flexibility to find the right solution to meet our clients’ needs,” Daniel explains.

Speaking of flexibility, determining the agility of your potential outsourcing provider’s approach should play a role in your selection process. “There’s always potential for things to change, particularly when delivering a transformation project over several years,” says Brian, adding “that’s why it’s so important to find a partner that can easily scale their solutions up or down, ensuring that you’ve always got the support you need to succeed.”

Determine quality standards

Determining the quality of a new outsourcing partner’s work before you’ve worked with them can be difficult, but there are some clues that can indicate whether a vendor’s quality standards are in line with your expectations, says Daniel, “A good outsourcing partner will be committed to adding value at every step of your project, so get details on their method and frequency of capturing feedback, whether the goals they set are realistic and achievable, and how they manage resource allocation on projects.”

Brian also recommends quizzing outsourcing providers about their recruitment and hiring process to ensure that you’ll be gaining access to reliable and skilled experts, “It’s easy for an outsourcing provider to say they have the best people, so it’s important to probe a little deeper. How experienced are their experts? How are they ensuring their talent is keeping up to date? What is their process for vetting new candidates? All these questions will help to gain an insight into an outsourcing provider’s quality bar – and whether it’s up to your standard.”

Assess value for money

For most IT leaders, cost is one of the most decisive factors when engaging any service; however, when looking for an IT outsourcing partner, it’s critical to consider more than just a provider’s pricing model. “Contractual comprehensiveness and flexibility should always be taken into account,” says Brian. “A contract that is vague can result in ‘scope creep’ and unexpected costs, while a rigid contract can tie businesses into a partnership that’s not adding value.” He adds, “Ultimately, it comes down to attitude: a good outsourcing provider can quickly become a great business partner when they go the extra mile.”

Daniel agrees and advises that IT leaders take a holistic view when weighing up potential outsourcing partners, “Look beyond your initial project, or resource requirements and consider where your business is heading and whether your shortlisted providers can bring in the skills and services you need. After all, a truly successful outsourcing partnership is one that can be relied on for the long haul.”

Looking for an outsourcing partner to help with your network operations? Contact our expert team today.

UX: Let’s make tech accessible

It’s not a new concept: from lifts on the Underground to ramps into public buildings, we’re all used to seeing the real-life equivalent of accessibility features as we go about our day. Airbnb hosts are encouraged to list any accessibility issues or benefits in their ads, and public buildings and newly built spaces are expected to take disabled visitors’ needs into account as well.

However, challenges still prevail, both in technology and in real life. Despite the fact that over 10 million people (over 18% of the population) have a limiting long-term illness, impairment or disability, they are often simply forgotten.

As in life, so it is online

Like restaurants that have invested in wheelchair ramps but hidden them at the back of the building, lots of ‘real life’ and online places are technically accessible. But the extra time and effort needed to use them means the problem isn’t really being solved, and disabled people are still being excluded.

In fact, some measures seem to have been taken with an insultingly thoughtless, check-box mentality. In June 2022, Wireless Festival at Crystal Palace decided to pitch the accessible viewing platform at the top of a hill to save money, requiring patrons’ friends to push their wheelchairs up a 10% incline or carry them! I wonder how many websites are similarly inconsiderate of the actual needs of certain users.

On the other hand, treasured old buildings and ancient pieces of tech alike were often simply not built with accessibility in mind. When visiting Madame Tussauds with a friend who walks with a stick and finds stairs agonising, we used a total of four randomly located lifts to access five floors, which required us to weave through exhibits the wrong way and wait around for staff to help. As a mind-bending response to a building that’s almost two hundred years old, it’s better than nothing, but nobody would design it that way if they’d thought about accessibility first.

Online leads the way

Online systems that are built first, with accessibility added only once the product is complete, face similar risks. The infrastructure of our lives is no longer solely built around physical spaces: it’s built around online ones too, where we now conduct every conceivable part of our lives. According to Deque survey research, 73% of accessibility professionals saw an increase in accessibility awareness on digital channels throughout the pandemic. Not being able to access these spaces can hugely restrict disabled people’s lives, cutting them off from opportunities.

Actively discriminating against anyone is, of course, illegal – and there can be hefty fines and reputational damage for not adhering to WCAG standards. What’s often forgotten is that systems that don’t consider disabled users ultimately exclude them by default. It’s worth remembering that anyone can become disabled, even if it’s just a broken arm that restricts typing for six weeks or an ear infection that leaves you temporarily deaf. More than that, accessibility features benefit all users: captions on video content, for example, help a user in a noisy office. We all win when accessibility is considered.

Value UX and value your users

Code is easier to rework than bricks and mortar, but what’s easiest of all is building things right from the beginning. Understanding that all users need an equally positive experience is crucial.

Karen Hawkins of eSSENTIAL Accessibility, the world’s #1 Accessibility-as-a-Service platform, has emphasised the importance of making sure ‘foundational elements are as accessible as possible, these foundational elements being colours, but also typography, small atoms and molecules, like your buttons and your links and your text boxes – they get used everywhere’.

Adopting the right mindset, where accessibility is the default and not a bolt-on, is an ideal way to start. Don’t stop at whether it is possible for a disabled user to complete a task – also consider how easy and fast it is.

Ask your customers about their disabled user base and see if you can speak to disabled users as part of gathering requirements. However, they may not have the best visibility of such users – in fact the customer may not have put any thought into accessibility at all. This can be an area where tech developers can provide leadership as well as creative ideation about the potential needs of unknown users.

Specific accessibility features might include providing subtitles or transcripts for all video content, or using a high contrast ratio between text and background and relying on more than just colour to convey important information. Furthermore: do things like screen readers work accurately? Will the screen flash, potentially causing seizures in some users? How about automatic log-outs due to inactivity, which could affect users with movement issues who may take longer completing forms? Will complex language be difficult for some users? Considering and including these features from the outset, as well as testing them with users with disabilities, can save time and money later on.
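
As a concrete example of one such check, the contrast ratio between text and background can be computed directly from the formulae published in WCAG 2.x; the sketch below is a minimal implementation (the 4.5:1 threshold mentioned in the comment is the WCAG AA requirement for normal-sized text):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255
        # Linearise the gamma-encoded sRGB channel value.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """Contrast ratio between two colours, from 1:1 up to 21:1.
    WCAG AA requires at least 4.5:1 for normal text."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white gives the maximum 21:1, while mid-grey (#777777) text on white comes out just under 4.5:1 – failing AA for body text despite looking ‘readable’.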

Accessibility is about so much more than speaking to any one user: it’s about challenging your expectations of who will ultimately end up using your product. Tim Berners-Lee, the inventor of the World Wide Web, said that ‘The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.’ A software product is only as good as its end users find it to be: design that needlessly excludes potentially 20% of the working population should be seen as a failure. Design that includes everyone is the ultimate success.
To find out more about our capabilities in this area, please check our Digital Design, Build & Operate page.

How much design is enough?

Imagine two people are decorating houses, side by side. One wants every detail mapped out in advance, researching all the possibilities and putting in a massive order before seeing anything in person. The other prefers a more spontaneous approach. They might have a vague outline of the sort of house they’d like, but they’d prefer to make it up as they go along.

As things come together, the first person realises that nothing they’ve committed to quite looks or goes together in the way they imagined and there’s no real turning back. The second has a rather more chaotic process, but everything that goes into their house is absolutely fabulous. It’s only at the very end that they realise they have painted the same room seven different colours throughout the process.

These ways of thinking shape more than just our interior décor – they crucially apply to how we understand tech and software development. Committing to a large amount of architecture before kicking off is no longer considered best practice, but including it is still vitally important. Architects, developers and potential clients are left to decide – how much design is enough?

Getting it wrong

Without architecture, the bigger picture quickly gets lost. For instance, a developer might be working on new functionality that will be shared across various departments. Developing it for one customer in one department is fairly straightforward. However, have they considered all of the flows and interactions with other parts of the business? Is there potential to consolidate some functions into a shared, one-stop-shop service?

Architecture

Good architecture provides an awareness of dependencies, interactions and other contextual drivers, like legacy systems and stakeholder mapping. If you want something that’s more than the sum of its parts, it’s essential.

Too much upfront design, though, creates a very long feedback loop, where you’ve built half a system before you have any clue whether any of it works. In the worst cases, “solutioneering” takes over and the design itself – sometimes pre-issued by the client, with tech already decided – becomes more important than understanding and meeting the requirements. By that point, whether or not it actually benefits the end user has probably been completely forgotten.

Most often, things go wrong when architects and developers don’t talk to each other. Each withdraws into an ivory tower and fails to communicate or remember the benefits of collaboration. As a formalised process, architecture can become too distant from the reality of building it and too rigid to flex to new information that arises from agile iterations.

How do we get it right?

Agile has taken over – and architecture must flex to fit in. This means greater levels of collaboration, working hand in hand with development teams.

Breaking up the architecture approach so that it’s completed in segments that align with actual development can keep the process one step ahead of the build while ensuring it’s still adaptable. This also allows both sides of the work to validate and verify: building the right thing via architecture focussed on big-picture goals, and building it the right way through feedback-focussed iterations. Features will then be effective not just in their immediate goal but in the broader context of the software.

Architectural principles and patterns can also be vitally helpful by collaboratively establishing the broad guidelines for architectural decisions that will be made later on. To go back to our house designing metaphor, you might not decide exactly what furniture is going into each room, but you might decide on distinct colour schemes that harmonise with each other.

Together, principles and patterns keep services and features aligned and consistent. Not every detail is planned out, but there will be a clear understanding of how things like naming conventions and interactions will be done and how users will be authenticated. That can be easily replicated in the future while still leaving flexibility around it.

At its best, architecture works in harmony with other delivery roles, working toward the same goal and focussing on software that solves problems for the client and the end user. Balancing development and architecture means finding effective methods to maximise both capabilities while harmonising them with each other. In this, as in most other things, teamwork and collaboration are key.

To find out more about our capabilities in this area, check out our IT Solution Architecture & Design page.

 

Digital Twin: Seeing the Future

 

Predicting what’s coming next and understanding how best to respond is the kind of challenge organisations struggle with all the time. As the world becomes less predictable and ever-changing technology transforms operations, historical data becomes harder to extrapolate. And even if you can make reasonable assumptions about future changes, understanding how they will impact the various aspects of your business is even more problematic.

Decision makers need another tool in their arsenal to help them build effective strategies that can guide big changes and investments. They need to combine an understanding of their setup with realistic projections of how external and internal changes could have an impact. A Digital Twin built with predictive models can combine these needs, giving highly relevant and reliable data that can guide your future course.

The Defence Fuels Prototype

Using Mood Software and in collaboration with the MOD’s Defence Fuels Transformation, CACI built a digital twin focused on fuel movement within an air station. With it, we aimed to understand the present but also, crucially, to predict the near future and test further-reaching changes.

We used two kinds of predictive model that can learn from actual behaviour. For immediate projections, we implemented machine learning models that used a small sample of historical data on the refuelling vehicles required for a given demand, allowing an ‘early warning system’ to be created.

However, we knew that the real value came in understanding what’s further ahead, where there is a higher risk of the wrong decision seriously impacting the success of operations. We adapted and integrated an existing Defence Fuels Enterprise simulation model, Fuel Supply Analysis Model (FSAM), to allow the testing of how a unit would operate given changes to the configuration of refuelling vehicles.

Functions were coded in a regular programming language to mimic the structural model and the kinds of behaviour evidenced through the data pipeline. As a result, we are able to change these functions to easily understand what the corresponding changes would be in the real world.
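
By way of illustration only – the figures, names and behaviour below are invented for this sketch, not taken from FSAM or real Defence Fuels data – a function mimicking refuelling behaviour might look like this, with the vehicle configuration exposed as ordinary parameters:

```python
import random

def simulate_day(num_bowsers, demand_sorties, fuel_per_sortie=5000,
                 bowser_capacity=20000, trips_per_bowser=4, seed=None):
    """Toy model of one day's refuelling: can this configuration of
    refuelling vehicles meet the day's sortie demand? All figures are
    illustrative placeholders."""
    rng = random.Random(seed)
    # Total litres the configured fleet can deliver in a day.
    deliverable = num_bowsers * bowser_capacity * trips_per_bowser
    # Demand varies day to day; model that with simple random variation.
    demand = sum(fuel_per_sortie * rng.uniform(0.8, 1.2)
                 for _ in range(demand_sorties))
    return demand <= deliverable

def availability(num_bowsers, demand_sorties, days=1000):
    """Fraction of simulated days on which demand was met."""
    met = sum(simulate_day(num_bowsers, demand_sorties, seed=day)
              for day in range(days))
    return met / days
```

Because the model is ordinary code, testing an alternative solution – fewer bowsers, larger tanks, a different demand profile – is a parameter change followed by a re-run over many simulated days.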

This allows decision makers to test alternative solutions with simulation models calibrated against existing data. Models informed by practical realities enable testing with greater speed and confidence, so you have some likely outcomes before committing to any change.

 

What does this mean for me?

Digital Twins are extremely flexible pieces of technology that can be built to suit all kinds of organisations. They are currently in use in factories, defence, retail and healthcare. Adaptable to real-world assets and online systems alike, it’s hard to think of any area they couldn’t be applied to.

Pairing a digital representation of your operations, processes and systems with predictive and simulation models allows substantial de-risking of decision making. You can predict what will happen if your resourcing situation changes, and plan accordingly; you can also understand the impact of sweeping structural changes. The resulting data has been proven against real-world decisions, making it truly reliable.

Time magazine has predicted that Digital Twins will ‘shape the future’ of multiple industries going forward and I think it’s hard to argue with that.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

For more on Mood Software and how it can be your organisation’s digital operating model, visit the product page.

How ethical is machine learning?

How ethical is machine learning?

We all want tech to help us build a better world: Artificial Intelligence’s use in healthcare, fighting human trafficking and achieving gender equity are great examples of where this is already happening. But there are always going to be broader ethical considerations – and as AI gets more invisibly woven into our lives, these are going to become harder to untangle.

What’s often forgotten is that AI doesn’t just impact our future – it’s fuelled by our past. Machine learning, one variety of AI, learns from previous data to make autonomous decisions in the present. However, which parts of our existing data we wish to use as well as how and when we want to apply them is highly contentious – and it’s likely to stay that way.

A new frontier – or the old Wild West?

For much of human history, decisions were made that did not reflect current ideals or even norms. Far from changing the future for the better, AI runs the risk of mirroring the past. A computer program used by a US court for risk assessment proved to be highly racially biased, probably because minority ethnic groups are overrepresented in US prisons and therefore also in the data it was drawing conclusions from.

This demonstrates two dangers: repeating our biases without question, and using technology where it was never appropriate in the first place. Supposedly improved systems are still being developed and deployed in this area, with ramifications for real human freedom and safety. Whatever efficiencies automation brings, human judgement is always going to have its place.

The ethics of language modelling, a specific form of machine learning, are increasingly up for debate. At its most basic it provides the predictive texting on your phone, using past data to guess what’s needed after your prompt. On a larger scale, complex language models are used in natural language processing (NLP) applications, applying algorithms to create text that reads like real human writing. We already see these in chatbots – with results that can range from the useful to the irritating to the outright dangerous.
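To make the “predictive texting” idea concrete, here’s a minimal sketch of a bigram model in Python. The corpus, function names and data are all illustrative: the model simply counts which word most often follows each word in its training text, and so inherits whatever patterns – and biases – that text contains.

```python
from collections import Counter, defaultdict


def train_bigram_model(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model


def predict_next(model: dict, prompt_word: str):
    """Return the word most often seen after prompt_word, if any."""
    followers = model.get(prompt_word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]


corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Scaled up from bigrams over one sentence to billions of parameters over much of the internet, this is the same basic bargain: the model can only reflect the text it was trained on.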

At the moment, when we’re interacting with a chatbot we probably know it – in most instances the language is still a little too stilted to pass as a real human. But as language modelling technology improves and becomes less distinguishable from real text, the bigger opportunities – and issues – are only going to be exacerbated.

Where does the data come from?

GPT-3, created by OpenAI, is the most powerful language model yet: from just a small amount of input, it can generate a vast range, and amount, of highly realistic text – from code to news reports to apparent dialogue. According to its developers ‘Over 300 applications are delivering GPT-3–powered search, conversation, text completion and other advanced AI features’.

And yet MIT’s Technology Review described it as based on ‘the cesspits of the internet’. Drawing indiscriminately on online publications, including social media, it’s been frequently shown to spout racism and sexism as soon as it’s prompted to do so. Ironically, with no moral code or filter of its own, it is perhaps the most accurate reflection we have of our society’s state of mind. It, and models like it, are increasingly fuelling what we read and interact with online.

Human language published on the internet, fuelled by algorithms that encourage extremes of opinion and reward anger, has already created enormous divisions in society, spreading misinformation that literally claims lives. Language models that generate new text indiscriminately and parrot back our worst instincts could well be an accelerant.

The words we use

Language is more than a reflection of our past; it shapes our perception of reality. For instance, the Native American Hopi language doesn’t treat time in terms of ‘chunks’ like minutes or hours. Instead, its speakers talk – and indeed think – of time as an unbroken stream that cannot be wasted. Similar examples span every difference in language, grammar and sentence structure, each both influencing and being influenced by our modes of thinking.

The language we use has enormous value. If it’s being automatically generated and propagated everywhere, shaping our world view and how to respond to it, it needs to be done responsibly, fairly and honestly. Different perspectives, cultures, languages and dialects must be included to ensure that the world we’re building is as inclusive, open and truthful as possible. Otherwise the alternate perspectives and cultural variety they offer could become a thing of the past.

What are the risks? And what can we do about them?

Ethical AI

Language-model technology is already hard to regulate, partly because of the massive financial investment required to build the models: development is concentrated in just a few large businesses that now have access to even more power. Without relying on human writers, they could potentially operate thousands of sites that flood the internet with automatically written content. Language models can then learn which characteristics drive viral spread, generate more of the same, and repeat the cycle at massive quantity and speed.

Individual use can also lead to difficult questions. A developer used GPT-3 to create a ‘deadbot’ – a chatbot based on his deceased fiancée that perfectly mimicked her. The idea of chatbots that can pass themselves off as real, living people might be thrilling to some and terrifying to others, but it’s hard not to feel squeamish about a case like that.

Ultimately, it is the responsibility of developers and businesses everywhere to consider their actions and the future impact of what they create. Encouragingly, positive steps are being taken. Meta – previously known as Facebook – has taken the unprecedented step of making their new language model completely accessible to any developer, along with details about how it was trained and built. According to Meta AI’s managing director, ‘We strongly believe that the ability for others to scrutinize your work is an important part of research. We really invite that collaboration.’

The opportunities for AI are vast, especially where it complements and augments human progress toward a better, more equal and opportunity-filled world. But the horror stories are not to be dismissed. As with every technological development, it’s about whose hands it’s put in – and who they intend to benefit.

To find out more about our capabilities in this area, check out our DevSecOps page.

 

What can a Digital Twin do for you?



Meaningfully improving your organisation’s operations sometimes requires more than just tinkering: it can require substantial change to bring everything up to scratch. But the risks of getting it wrong, especially for mission-critical solutions depended on by multiple parties, frequently put decision makers off. What if you could trial that change, with reliable predictions and the potential to model different scenarios, before pushing the button?

CACI’s Digital Twin offers just that capability. Based on an idea already breaking new ground everywhere from businesses like BMW to government agencies like NASA, it gives decision makers a highly accurate view into the future. Working as a real-time digital counterpart of any system, it can be used to simulate potential situations on the current set-up, or to model the impact of future alterations.

Producing realistic data (that’s been shown to match the effects of actual decisions once they’ve been made), this technology massively reduces risk across an organisation. Scenario planning is accelerated, with enhanced complexity, resulting in better alignment between decision makers.

What are Digital Twins doing right now?

Beyond physical assets like wind turbines and water distribution networks, Digital Twins are now being broadly used for business operations, and federated to tackle larger problems such as the control of a ‘smart city’. They’re also being used for micro-instances of highly risky situations – allowing surgeons to practise heart surgery, and engineers to build quicker, more effective prototypes of fighter jets.

Recently, Anglo American used this technology to create a twin of its Quellaveco mine, where ‘digital mining specialists can perform predictive tests that help reduce safety risks, optimise the use of resources and improve the performance of production equipment’. Interest is also growing in this tech’s potential use within retail, where instability on both the supply and demand sides has been causing havoc since the pandemic.

This technology allows such businesses to take control of their resources, systems and physical spaces, while trialling the impact of future situations before they come to pass. In a world where instability is the new norm, Digital Twins supersede reliance on historical data. They also allow better insight and analysis into current processes for quicker improvements, and overall give an unparalleled level of transparency.

Digital twin data visual

Where does Mood come in?

Mood Software is CACI’s proprietary data visualisation tool and has a record of success in enabling stakeholders to better understand their complex organisations. Mood is crucial to CACI’s Digital Twin solution as it integrates systems to create a single working model for management and planning. It enables collaborative planning, modelling and testing, bringing together stakeholders so they can work to the same goals.

Making effective decisions requires optimal access to data – and the future is the one area where we have none. But with Digital Twin technology, you can chart your own path and make decisions with an enhanced level of insight.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

7 Steps to Strong Cloud Security


 

Demand for cloud-based offerings has accelerated due to the COVID-19 pandemic, with the importance of flexibility and agility now being realised. Without adapting, businesses risk being left behind, but what are the benefits and how do you know if it’s the right solution for you?

We shared the key advantages of cloud adoption and challenges in cloud security in our previous blogs.

In our final article in this series of blogs, we share the key steps to strengthen your organisation’s cloud security.

As more businesses adopt cloud technology, primarily to support hybrid working, cybercriminals are focusing their tactics on exploiting vulnerable cloud environments. Last year, a report found that 98% of companies experienced at least one cloud data breach in the past 18 months, up from 79% in 2020. Of those surveyed, a shocking 67% reported three or more incidents.

This issue has been exacerbated by soaring global demand for tech talent. According to a recent survey, over 40% of IT decision-makers admitted to their business having a cyber security skills gap.
It’s a vulnerable time for enterprise organisations, and cloud security is the top priority for IT leaders. Here we consider the critical steps you can take now to make your business safer.

1. Understand your shared responsibility model

Defining and establishing the split of security responsibilities between an organisation and its cloud service provider (CSP) is one of the first steps in creating a successful cloud security strategy. Taking this action will provide more precise direction for your teams and mean that your apps, security, network, and compliance teams all have a say in your security approach. This helps to ensure that your security approach considers all angles.

2. Create a data governance framework

Once you’ve defined responsibilities, it’s time to set the rules. Establishing a clear data governance framework that defines who controls data assets and how data is used will provide a streamlined approach to managing and protecting information. However, setting the rules is one thing; ensuring they’re carefully followed is another – employing content control tools and role-based access controls to enforce this framework will help safeguard company data. Ensure your framework is built on a solid foundation by engaging your senior management early in your policy planning. With their input, influence, and understanding of the importance of cloud security, you’ll be better equipped to ensure compliance across your business.
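As a rough illustration of the role-based access control idea mentioned above, a deny-by-default permission check can be as simple as the following Python sketch. The role and permission names are hypothetical; a real deployment would use an identity provider and policy engine rather than a hard-coded table.

```python
# Hypothetical role-to-permission mapping; names are illustrative only.
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:configs"},
    "admin":    {"read:reports", "write:configs", "delete:data"},
}


def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles or permissions get no access."""
    return permission in ROLE_PERMISSIONS.get(role, set())


assert is_allowed("engineer", "write:configs")
assert not is_allowed("analyst", "delete:data")
```

The key design choice is the default: anything not explicitly granted is refused, which is exactly the posture a data governance framework should enforce.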

3. Opt to automate

In an increasingly hostile threat environment, in-house IT teams are under pressure to manage high numbers of security alerts. But it doesn’t have to be this way. Automating security processes such as cybersecurity monitoring, threat intelligence collection, and vendor risk assessments means your team can spend less time analysing every potential threat and more time on innovation and growth activities, while reducing admin errors.

4. Assess and address your knowledge gaps

Your users can either provide a strong line of defence or open the door to cyber-attacks. Make sure it’s the former by equipping the staff and stakeholders that access your cloud systems with the knowledge and tools they need to conduct safe practices, for example, by providing training on identifying malware and phishing emails.
For more advanced users of your cloud systems, take the time to review capability and experience gaps and consider where upskilling or outsourcing is required to keep your cloud environments safe.

5. Consider adopting a zero-trust model

Based on the principle of ‘Never Trust, Always Verify’, a zero-trust approach removes the assumption of trust from the security architecture by requiring authentication for every action, user, and device. Adopting a zero-trust model means always assuming that there’s a breach and securing all access to systems using multi-factor authentication and least privilege.
In addition to improving resilience and security posture, a zero-trust approach can also benefit businesses by enhancing user experiences through Single Sign-On (SSO) enablement, enabling better collaboration between organisations, and increasing visibility of your user devices and services. However, not all organisations can accommodate a zero-trust approach: incompatibility with legacy systems, cost, disruption, and vendor lock-in must be balanced against the security advantages of zero-trust adoption.
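The ‘Never Trust, Always Verify’ principle can be sketched in a few lines of Python. This is an illustrative toy, not a real zero-trust implementation: every request must re-verify identity, MFA, device health, and an explicit least-privilege grant, and anything unknown is denied.

```python
from dataclasses import dataclass


@dataclass
class Request:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool
    permission: str


# Least-privilege grants per user; names are invented for the example.
GRANTS = {"alice": {"read:finance"}}


def authorise(user: str, req: Request) -> bool:
    """Zero trust: never rely on network location or prior sessions.

    Every single request is checked for authentication, MFA, device
    health, and an explicit grant; any missing check means denial.
    """
    return (
        req.user_authenticated
        and req.mfa_passed
        and req.device_compliant
        and req.permission in GRANTS.get(user, set())
    )
```

Note there is no concept of a ‘trusted’ internal caller: the same four checks run for every request, which is the assume-breach posture described above.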

6. Perform an in-depth cloud security assessment

Ultimately, the best way to bolster your cloud security is to perform a thorough cloud security audit. Having a clear view of your cloud environments, users, security capabilities, and inadequacies will allow you to take the best course of action to protect your business.

7. Bolster your defences

The most crucial principle of cloud security is that it’s an ongoing process and continuous monitoring is key to keeping your cloud secure. However, in an ever-evolving threat environment, IT and infosec professionals are under increasing pressure to stay ahead of cybercriminals’ sophisticated tactics.

A robust threat monitoring solution can help ease this pressure and bolster your security defence. Threat monitoring works by continuously collecting, collating, and evaluating security data from your network sensors, appliances, and endpoint agents to identify patterns indicative of threats. Because threat monitoring analyses data alongside contextual factors such as IP addresses and URLs, threat alerts are more accurate. Additionally, traditionally hard-to-detect threats, such as unauthorised internal accounts, can be identified.
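As a simplified illustration of the kind of pattern matching involved, the Python sketch below flags any source IP that produces a burst of failed logins within a sliding time window. The event format and thresholds are invented for the example; production systems would consume real sensor and endpoint feeds.

```python
from collections import defaultdict


def flag_suspicious_ips(events, threshold=5, window_seconds=60):
    """Flag any source IP with `threshold` or more failed logins
    inside a `window_seconds` sliding window.

    `events` is an iterable of (timestamp, ip, outcome) tuples.
    """
    failures = defaultdict(list)  # ip -> recent failure timestamps
    flagged = set()
    for timestamp, ip, outcome in sorted(events):
        if outcome != "login_failure":
            continue
        times = failures[ip]
        times.append(timestamp)
        # Drop failures that have fallen out of the window.
        while times and times[0] <= timestamp - window_seconds:
            times.pop(0)
        if len(times) >= threshold:
            flagged.add(ip)
    return flagged
```

Real platforms layer many such rules, plus contextual enrichment (known-bad IP lists, URL reputation), which is what makes their alerts more accurate than any single check.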

Businesses can employ myriad options for threat monitoring, from data protection platforms with threat monitoring capabilities to a dedicated threat monitoring solution. However, while implementing threat monitoring is a crucial and necessary step to securing your cloud environments, IT leaders must recognise that a robust security program comprises a multi-layered approach utilising technology, tools, people, and processes.

Get your cloud security assessment checklist and the best cloud security strategies in our comprehensive guide to cloud security.

The 9 Biggest Challenges in Cloud Security


Demand for cloud-based offerings has accelerated due to the COVID-19 pandemic, with the importance of flexibility and agility now being realised. Without adapting, businesses risk being left behind, but what are the benefits and how do you know if it’s the right solution for you?

We shared the key advantages of cloud adoption in our previous blog. This time around, we identify the biggest challenges of cloud security.

Cloud adoption has become increasingly important in the last two years, as businesses responded to the COVID-19 pandemic. Yet, a 2020 survey reported that cloud security was the biggest challenge to cloud adoption for 83% of businesses. [1]

As cybercriminals increasingly target cloud environments, the pressure is on for IT leaders to protect their businesses. Here, we explore the most pressing threats to cloud security you should take note of.

1. Limited visibility

Traditional tools for gaining complete network visibility are ineffective in cloud environments, since cloud-based resources sit outside the corporate network and run on infrastructure the company doesn’t own. Further, most organisations lack a complete view of their cloud footprint. You can’t protect what you can’t see, so having a handle on the entirety of your cloud estate is crucial.

2. Lack of cloud security architecture and strategy

The rush to migrate data and systems to the cloud meant that organisations were operational before thoroughly assessing and mitigating the new threats they’d been exposed to. The result is that robust security systems and strategies are not in place to protect infrastructure.

3. Unclear accountability

Pre-cloud, security was firmly in the hands of security teams. But in public and hybrid cloud settings, responsibility for cloud security is split between cloud service providers and users, with responsibility for security tasks differing depending on the cloud service model and provider. Without a standard shared responsibility model, addressing vulnerabilities effectively is challenging as businesses struggle to grapple with their responsibilities.

In a recent survey of IT leaders, 84% of UK respondents admitted that their organisation struggles to draw a clear line between their responsibility for cloud security and their cloud service provider’s responsibility for security. [2]

4. Misconfigured cloud services

Misconfiguration of cloud services can cause data to be publicly exposed, manipulated, or even deleted. It occurs when a user or admin fails to set up a cloud platform’s security settings properly. For example, keeping default security and access management settings for sensitive data, giving unauthorised individuals access, or leaving confidential data accessible without authorisation are all common misconfigurations. Human error is always a risk, but it can be easily mitigated with the right processes.
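One such process is an automated scan of configurations against safe defaults. The Python sketch below is purely illustrative – the setting names are invented, and real cloud platforms expose these settings through their own APIs and policy tools – but it shows the idea of catching risky deviations before they expose data.

```python
# Invented setting names for a simplified storage-bucket configuration.
SAFE_DEFAULTS = {
    "public_read": False,
    "public_write": False,
    "encryption_enabled": True,
}


def find_misconfigurations(bucket_config: dict) -> list:
    """Return the settings that deviate from safe defaults.

    Settings absent from the config are assumed to take the safe value.
    """
    issues = []
    for setting, safe_value in SAFE_DEFAULTS.items():
        if bucket_config.get(setting, safe_value) != safe_value:
            issues.append(setting)
    return issues


risky = {"public_read": True, "encryption_enabled": False}
print(find_misconfigurations(risky))  # ['public_read', 'encryption_enabled']
```

Run on a schedule across an estate, even a simple check like this turns a one-off human error into an alert rather than a breach.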

5. Data loss

Data loss is one of the most complex risks to predict, so taking steps to protect against it is vital. The most common types of data loss are:

Data alteration – when data is changed and cannot be reverted to the previous state.

Storage outage – access to data is lost due to issues with your cloud service provider.

Loss of authorisation – when information is inaccessible due to a lack of encryption keys or other credentials.

Data deletion – data is accidentally or purposefully erased, and no backups are available to restore information.

While regular back-ups will help avoid data loss, backing up large amounts of company data can be costly and complicated. Nonetheless, 304.7 million ransomware attacks were conducted globally in the first half of 2021, a 151% increase from the previous year.[3] With ransomware attacks surging, businesses can ill afford to neglect regular data backups.

6. Malware

Malware can take many forms, including DoS (denial of service) attacks, hyperjacking, hypervisor infections, and exploiting live migration. Left undetected, malware can rapidly spread through your system and open doors to even more serious threats. That’s why multiple security layers are required to protect your environment.

7. Insider threats

While images of disgruntled employees may spring to mind, malicious intent is not the most common cause of insider threat security incidents. According to a report published in 2021, 56% of incidents were caused by negligent employees. [4]

Worryingly, the frequency of insider-led incidents is on the rise. The number of threats has jumped by 44% since 2020.[5] It’s also getting more expensive to tackle insider threat issues. Costs have risen from $11.45 million in 2020 to $15.38 million in 2022, a 34% increase. [6]

8. Compliance concerns

While some industries are more heavily regulated than others, you’ll likely need to know where your data is stored, who has access to it, how it’s being processed, and what you’re doing to protect it. This can become more complicated in the cloud. Further, your cloud provider may be required to hold specific compliance credentials.

Failure to follow the regulations can result in substantial legal, financial and reputational repercussions. Therefore, it’s critical to handle your regulatory requirements, ensure good governance is in place, and keep your business compliant.

9. API Vulnerabilities

Cloud applications typically interact via APIs (application programming interfaces). However, insecure external APIs can provide a gateway, allowing threat actors to launch DoS attacks and code injections to access company data.

In 2020, Gartner predicted API attacks would become the most frequent attack vector by 2022. With a reported 681% growth of API attack traffic in 2021,[7] this prediction has already become a reality. Addressing API vulnerabilities will therefore be a chief priority for IT leaders in 2022 and beyond.

Check out our comprehensive guide to cloud security for more

 

Notes:
[1] 64 Significant Cloud Computing Statistics for 2022: Usage, Adoption & Challenges
[2] Majority of UK firms say cyber threats are outpacing cloud security
[3] Ransomware attacks in 2021 have already surpassed last year
[4] – [6] Insider Threats Are (Still) on the Rise: 2022 Ponemon Report
[7] Attacks abusing programming APIs grew over 600% in 2021