How to find the right IT outsourcing partner

Looking to work with an IT outsourcing provider? Finding the right partner to deliver your requirements can be a tricky and time-consuming process. But, done right, a successful outsourcing relationship can bring long-term strategic benefits to your business. We asked our experts to share their top tips on how to find the right IT outsourcing partner.

Evaluate capabilities

Having the right expertise is the obvious and most essential criterion, so defining your requirements and expectations is the best way to start your search.

When it comes to narrowing down your vendor choices, it’s important to consider the maturity of an organisation as well as technical capabilities. “The risk of working with a small, specialised provider is that they may struggle to keep a handle on your project,” warns Brian Robertson, Resource Manager at CACI. Conversely, a larger organisation may have the expertise, but not the personal approach you’re looking for in a partner. “Always look for a provider that demonstrates a desire to get to the root of your business’s challenges and can outline potential solutions,” Brian advises.

Find evidence of experience

Typically, working with an outsourcing provider that has accumulated experience over many years is a safe bet; however, Daniel Oosthuizen, Senior Vice President of CACI Network Services, recommends ensuring that your prospective outsourcing provider has experience that is relevant to your business: “When you bring in an outsourcing partner, you want them to hit the ground running, not spend weeks and months being onboarded into your world.” Daniel adds, “This becomes more apparent if you work in a regulated industry, such as banking or financial services, where it’s essential that your provider can guarantee compliance with regulatory obligations as well as your internal policies.”

So, how can you be sure a provider has the experience you’re looking for? Of course, the provider’s website, case studies, and testimonials are a good place to start, but Daniel recommends interrogating a vendor’s credentials directly: “A successful outsourcing relationship hinges on trust, so it’s important to get a sense of a vendor’s credibility early on. For example, can they demonstrate an in-depth knowledge of your sector? Can they share any details about whom they currently partner with? And can they confidently talk you through projects they’ve completed that are similar to yours?”

Consider cultural compatibility

“When it comes to building a strong, strategic and successful outsourcing partnership, there’s no greater foundation than mutual respect and understanding,” says Brian. Evaluating a potential provider’s approach and attitudes against your business’s culture and core values is another critical step in your vetting process. As Daniel says, “If you share the same values, it will be much easier to implement a seamless relationship between your business and your outsourcing partner, making day-to-day management, communication and even conflict resolution more effective and efficient”.

While checking a company’s website can give you some insight into your prospective provider’s values, it’s also worth finding out how long they’ve held partnerships with other clients, as that can indicate whether they can maintain partnerships for the long-term.

However, Daniel says, “The best way to test if a provider has partnership potential is to go and meet them. Get a feel for the team atmosphere, how they approach conversations about your challenges, and how their values translate in their outsourcing relationships.” Brian adds, “Your vision and values are what drive your business forward, so it’s essential that these components are aligned with your outsourcing provider to gain maximum value from the relationship.”

Assess process and tools

Once you’ve determined a potential outsourcing provider’s level of experience and expertise, it’s important to gain an understanding of how they will design and deliver a solution to meet your business’s needs. “It’s always worth investigating what tech and tools an outsourcing provider has at their disposal and whether they are limited by manufacturer agreements. For example, at CACI, our vendor-agnostic approach means we’re not tied to a particular manufacturer, giving us the flexibility to find the right solution to meet our clients’ needs,” Daniel explains.

Speaking of flexibility, determining the agility of your potential outsourcing provider’s approach should play a role in your selection process. “There’s always potential for things to change, particularly when delivering a transformation project over several years,” says Brian, adding “that’s why it’s so important to find a partner that can easily scale their solutions up or down, ensuring that you’ve always got the support you need to succeed.”

Determine quality standards

Determining the quality of a new outsourcing partner’s work before you’ve worked with them can be difficult, but there are some clues that can indicate whether a vendor’s quality standards are in line with your expectations, says Daniel: “A good outsourcing partner will be committed to adding value at every step of your project, so get details on their method and frequency of capturing feedback, whether the goals they set are realistic and achievable, and how they manage resource allocation on projects.”

Brian also recommends quizzing outsourcing providers about their recruitment and hiring process to ensure that you’ll be gaining access to reliable and skilled experts: “It’s easy for an outsourcing provider to say they have the best people, so it’s important to probe a little deeper. How experienced are their experts? How are they ensuring their talent is keeping up to date? What is their process for vetting new candidates? All these questions will help to gain an insight into an outsourcing provider’s quality bar – and whether it’s up to your standard.”

Assess value for money

For most IT leaders, cost is one of the most decisive factors when engaging any service; however,
when looking for an IT outsourcing partner, it’s critical to consider more than just a provider’s pricing model. “Contractual comprehensiveness and flexibility should always be taken into account,” says Brian. “A contract that is vague can result in ‘scope creep’ and unexpected costs, while a rigid contract can tie businesses into a partnership that’s not adding value.” He adds, “Ultimately, it comes down to attitude: a good outsourcing provider can quickly become a great business partner when they go the extra mile.”

Daniel agrees and advises that IT leaders take a holistic view when weighing up potential outsourcing partners: “Look beyond your initial project or resource requirements and consider where your business is heading and whether your shortlisted providers can bring in the skills and services you need. After all, a truly successful outsourcing partnership is one that can be relied on for the long haul.”

Looking for an outsourcing partner to help with your network operations? Contact our expert team today.

UCLH’s ‘Find and Treat’ team to screen & treat homeless via eco-tricycle

Thousands can now be screened by UCLH’s ‘Find and Treat’ team for illnesses including tuberculosis, HIV and Covid-19

Doctors can now cycle around London to treat homeless and marginalised patients via an eco-tricycle, the UK’s first fold-out health clinic on wheels.

The tricycle, nicknamed the “Electric Trike”, will be used by University College London Hospitals’ (UCLH) “Find and Treat” team to screen and treat the most vulnerable of the UK’s population for illnesses including tuberculosis, HIV and Covid-19. According to the Evening Standard, a lack of documentation typically prevents these vulnerable communities from accessing a GP or visiting A&E, leaving many living with untreated illnesses. Outreach workers will be supporting the “Find and Treat” team by using their own experiences of homelessness to encourage others to use this service.

CACI is a proud sponsor of the eco-tricycle, and has supported the UCLH’s “Find and Treat” team by developing an application, ITRICS, that equips them with the latest secure connectivity and cloud technologies. This technical solution supports the workflow of a real-time end-to-end process from diagnosis to treatment, with ongoing enhancements to ITRICS projected to continue into summer 2023.

The “Find and Treat” team will now be able to deliver high-quality care to higher-risk communities in an eco-friendly capacity through these “smart connectivity” capabilities.

UX: Let’s make tech accessible

It’s not a new concept: from lifts on the Underground to ramps into public buildings, we’re all used to seeing the real-life equivalent of accessibility features as we go about our day. Airbnb hosts are encouraged to list any accessibility issues or benefits in their ads. Public buildings and newly built spaces are expected to take disabled visitors’ needs into account as well.

However, challenges still prevail, both in technology and in real life. Despite the fact that over 10 million people (over 18% of the population) have a limiting long-term illness, impairment or disability, they are often simply forgotten.

As in life, so it is online

Like restaurants that have invested in wheelchair ramps but hidden them at the back of the building, lots of ‘real life’ and online places are technically accessible. But the extra time and effort needed to use them means the problem isn’t really being solved, and disabled people are still being excluded.

In fact, some measures seem to have been taken with an insultingly thoughtless, check-box mentality. In June 2022, Wireless Festival at Crystal Palace decided to pitch the accessible viewing platform at the top of a hill to save money, requiring patrons’ friends to push their wheelchairs up a 10% incline or carry them! I wonder how many websites are similarly inconsiderate of certain users’ actual needs.

On the other hand, treasured old buildings and ancient pieces of tech alike were often simply not built with accessibility in mind. When visiting Madame Tussauds with a friend who walks with a stick and finds stairs agonising, we used a total of four randomly located lifts to access five floors. They required us to weave through exhibits the wrong way and wait around for staff help. As a retrofit to a building that’s almost two hundred years old, it’s better than nothing, but nobody would design it that way if they’d thought about accessibility first.

Online leads the way

Online systems that are built first, with accessibility added only once the product is complete, face similar risks. The infrastructure of our lives is no longer solely built around physical spaces: it’s built around online ones too, where we now conduct every conceivable part of our lives. According to Deque research, 73% of accessibility practitioners saw an increase in accessibility awareness on digital channels during the pandemic. For disabled people, not being able to access these spaces can hugely restrict their lives, cutting them off from opportunities.

Actively discriminating against anyone is of course illegal – and there can be hefty fines and reputational damage for not adhering to WCAG standards. What’s often forgotten is that systems that don’t think about disabled users ultimately exclude them by default. It’s worth remembering that anyone can become disabled, even if it’s just a broken arm that restricts typing for six weeks or an ear infection that leaves you temporarily deaf. More than that, accessibility features benefit all users: captions on video content, for example, help anyone watching in a noisy office. We all win when accessibility is considered.

Value UX and value your users

Code is easier to rework than bricks and mortar. But what’s easiest of all is building things right from the beginning. Understanding that all users need an equally positive experience is crucial.

Karen Hawkins of eSSENTIAL Accessibility, the world’s #1 Accessibility-as-a-Service platform, has emphasised the importance of making sure ‘foundational elements are as accessible as possible, these foundational elements being colours, but also typography, small atoms and molecules, like your buttons and your links and your text boxes – they get used everywhere’.

Adopting the right mindset, where accessibility is the default and not a bolt-on, is an ideal way to start. Don’t stop at whether it is possible for a disabled user to complete a task – also consider how easy and fast it is.

Ask your customers about their disabled user base and see if you can speak to disabled users as part of gathering requirements. However, they may not have the best visibility of such users – in fact the customer may not have put any thought into accessibility at all. This can be an area where tech developers can provide leadership as well as creative ideation about the potential needs of unknown users.

Specific accessibility features might include providing subtitles or transcripts for all video content. Or it could involve using a high contrast ratio between text and background, and relying on more than just colour to convey important information. Furthermore: do things like screen readers work accurately? Will the screen flash, potentially triggering seizures in some users? How about automatic log-outs due to inactivity, which could impact users with movement issues who may take longer completing forms? Will the complexity of any language be difficult for some users? Considering and including these features from the outset, as well as testing them with users with disabilities, can save time and money later on.
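On the contrast point, the check is mechanical enough to automate. Below is a minimal sketch (an illustration only, not a CACI tool) of the WCAG 2.1 relative-luminance and contrast-ratio calculation, which asks for at least 4.5:1 between normal-size text and its background:

```java
// Sketch: WCAG 2.1 contrast-ratio check between a foreground and a background colour.
public final class ContrastChecker {

    // Relative luminance of an sRGB colour, channels given as 0-255.
    static double relativeLuminance(int r, int g, int b) {
        return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b);
    }

    private static double linearise(int channel) {
        double c = channel / 255.0;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Contrast ratio between two colours; always reported lighter-over-darker, so >= 1.
    static double contrastRatio(int[] colourA, int[] colourB) {
        double l1 = relativeLuminance(colourA[0], colourA[1], colourA[2]);
        double l2 = relativeLuminance(colourB[0], colourB[1], colourB[2]);
        double lighter = Math.max(l1, l2);
        double darker = Math.min(l1, l2);
        return (lighter + 0.05) / (darker + 0.05);
    }

    public static void main(String[] args) {
        // Mid-grey (#999999) text on a white background comes out at roughly 2.8:1,
        // well below the 4.5:1 WCAG AA minimum for normal-size text.
        double ratio = contrastRatio(new int[] {153, 153, 153}, new int[] {255, 255, 255});
        System.out.printf("Contrast ratio: %.2f:1%n", ratio);
    }
}
```

The point of a check like this is that “looks readable to me” is not a reliable test: plenty of fashionable grey-on-white palettes fail the threshold for users with low vision.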

Accessibility is about so much more than speaking to any one user: it’s about challenging your expectations of who will ultimately end up using your product. Tim Berners-Lee, the inventor of the World Wide Web, said that ‘The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.’ A software product is only as good as its end users find it to be: design that needlessly excludes potentially 20% of the working population should be seen as a failure. Design that includes everyone is the ultimate success.
To find out more about our capabilities in this area, please check our Digital Design, Build & Operate page.

How much design is enough?

Imagine two people are decorating houses, side by side. One wants every detail mapped out in advance, researching all the possibilities and putting in a massive order before seeing anything in person. The other prefers a more spontaneous approach. They might have a vague outline of the sort of house they’d like, but they’d prefer to make it up as they go along.

As things come together, the first person realises that nothing they’ve committed to quite looks or goes together in the way they imagined and there’s no real turning back. The second has a rather more chaotic process, but everything that goes into their house is absolutely fabulous. It’s only at the very end that they realise they have painted the same room seven different colours throughout the process.

These ways of thinking shape more than just our interior décor – they apply just as much to how we approach tech and software development. Committing to a large amount of architecture before kicking off is no longer considered best practice, but some upfront design is still vitally important. Architects, developers and potential clients are left to decide – how much design is enough?

Getting it wrong

Without architecture, the bigger picture quickly gets lost. For instance, a developer might be working on new functionality that will be shared with various departments. Developing it for one customer in one department is fairly straightforward. However, have they considered all of the flows and interactions with other parts of the business? Is there potential to consolidate some functions into a shared, one-stop-shop service?

Good architecture provides an awareness of dependencies, interactions and other contextual drivers, like legacy systems and stakeholder mapping. If you want something that’s more than the sum of its parts, it’s essential.

Too much upfront design, though, creates a very long feedback loop where you’ve built half a system before you have any clue whether any of it works. In the worst cases, “solutioneering” takes over and the design itself – sometimes pre-issued by the client, with tech already decided – becomes more important than understanding and meeting the requirements. By that point, whether or not it actually benefits the end user has probably been completely forgotten.

Most often, things go wrong when architects and developers don’t talk to each other. Each withdraws into an ivory tower and fails to communicate or remember the benefits of collaboration. As a formalised process, architecture can become too distant from the reality of building it and too rigid to flex to new information that arises from agile iterations.

How do we get it right?

Agile has taken over – and architecture must flex to fit in. This means greater levels of collaboration, working hand in hand with development teams.

Breaking up the architecture approach so that it’s completed in segments that align with actual development can keep the process one step ahead of the build while ensuring it’s still adaptable. This also allows both sides of the work to validate and verify: building the right thing, through architecture that focusses on big-picture goals, and building it the right way, through feedback-focussed iterations. Features will then be effective not just in their immediate goal but in the broader context of the software.

Architectural principles and patterns can also be vitally helpful, collaboratively establishing the broad guidelines for architectural decisions that will be made later on. To go back to our house-decorating metaphor, you might not decide exactly what furniture is going into each room, but you might decide on distinct colour schemes that harmonise with each other.

Together, principles and patterns keep services and features aligned and consistent. Not every detail is planned out, but there will be a clear understanding of how things like naming conventions and interactions will be done and how users will be authenticated. That can be easily replicated in the future while still leaving flexibility around it.

At its best, architecture works in harmony with other delivery roles, working toward the same goal and focussing on software that solves problems for the client and the end user. Balancing development and architecture means finding effective methods to maximise both capabilities and to harmonise them with each other. In this, as in most other things, teamwork and collaboration are key.

To find out more about our capabilities in this area, check out our IT Solution Architecture & Design page.

 

How ethical is machine learning?

We all want tech to help us build a better world: Artificial Intelligence’s use in healthcare, fighting human trafficking and achieving gender equity are great examples of where this is already happening. But there are always going to be broader ethical considerations – and as AI gets more invisibly woven into our lives, these are going to become harder to untangle.

What’s often forgotten is that AI doesn’t just impact our future – it’s fuelled by our past. Machine learning, one variety of AI, learns from previous data to make autonomous decisions in the present. However, which parts of our existing data we wish to use as well as how and when we want to apply them is highly contentious – and it’s likely to stay that way.

A new frontier – or the old Wild West?

For much of human history, decisions were made that did not reflect current ideals or even norms. Far from changing the future for the better, AI runs the risk of mirroring the past. A computer program used by a US court for risk assessment proved to be highly racially biased, probably because minority ethnic groups are overrepresented in US prisons and therefore also in the data it was drawing conclusions from.

This demonstrates two dangers: repeating our biases without question, and inappropriate usage of technology in the first place. Supposedly improved systems are still being developed and utilised in this area, with ramifications for real human freedom and safety. For all the efficiencies of automation, human judgement is always going to have its place.

The ethics of language modelling, a specific form of machine learning, are increasingly up for debate. At its most basic it provides the predictive texting on your phone, using past data to guess what’s needed after your prompt. On a larger scale, complex language models are used in natural language processing (NLP) applications, applying algorithms to create text that reads like real human writing. We already see these in chatbots – with results that can range from the useful to the irritating to the outright dangerous.

At the moment, when we’re interacting with a chatbot we probably know it – in most instances the language is still a little too stilted to pass as a real human. But as language modelling technology improves and becomes less distinguishable from real text, the bigger opportunities – and issues – are only going to be exacerbated.

Where does the data come from?

GPT-3, created by OpenAI, is the most powerful language model yet: from just a small amount of input, it can generate a vast range, and amount, of highly realistic text – from code to news reports to apparent dialogue. According to its developers ‘Over 300 applications are delivering GPT-3–powered search, conversation, text completion and other advanced AI features’.

And yet MIT’s Technology Review described it as based on ‘the cesspits of the internet’. Drawing indiscriminately on online publications, including social media, it’s been frequently shown to spout racism and sexism as soon as it’s prompted to do so. Ironically, with no moral code or filter of its own, it is perhaps the most accurate reflection we have of our society’s state of mind. It, and models like it, are increasingly fuelling what we read and interact with online.​​​​​​​

Human language published on the internet, fuelled by algorithms that encourage extremes of opinion and reward anger, has already created enormous divisions in society, spreading misinformation that literally claims lives. Language models that generate new text indiscriminately and parrot back our worst instincts could well be an accelerant.

The words we use

Language is more than a reflection of our past; it shapes our perception of reality. For instance, the Native American Hopi language doesn’t treat time in terms of ‘chunks’ like minutes or hours. Instead, its speakers talk of it, and indeed think of it, as an unbroken stream that cannot be wasted. Other examples span every difference in vocabulary, grammar and sentence structure – both influencing and being influenced by our modes of thinking.

The language we use has enormous value. If it’s being automatically generated and propagated everywhere, shaping our world view and how to respond to it, it needs to be done responsibly, fairly and honestly. Different perspectives, cultures, languages and dialects must be included to ensure that the world we’re building is as inclusive, open and truthful as possible. Otherwise the alternate perspectives and cultural variety they offer could become a thing of the past.

What are the risks? And what can we do about them?

Language models are already hard to regulate, not least because of the massive financial investment required to create them. That work is currently being done by just a few large businesses, which now have access to even more power. Without relying on human writers, they could potentially operate thousands of sites that flood the internet with automatically written content. Language models can then learn which characteristics result in viral spread, generate more of the same, and repeat the cycle at massive quantity and speed.

Individual use can also lead to difficult questions. A developer used GPT-3 to create a ‘deadbot’ – a chatbot based on his deceased fiancée that perfectly mimicked her. The idea of chatbots that can pass as real, live people might be thrilling to some and terrifying to others, but it’s hard not to imagine feeling squeamish about a case like that.

Ultimately, it is the responsibility of developers and businesses everywhere to consider their actions and the future impact of what they create. Encouragingly, positive steps are being taken. Meta – previously known as Facebook – has taken the unprecedented step of making its new language model completely accessible to any developer, along with details about how it was trained and built. According to Meta AI’s managing director, ‘We strongly believe that the ability for others to scrutinize your work is an important part of research. We really invite that collaboration.’

The opportunities for AI are vast, especially where it complements and augments human progress toward a better, more equal and opportunity-filled world. But the horror stories are not to be dismissed. As with every technological development, it’s about whose hands it’s put in – and who they intend to benefit.

To find out more about our capabilities in this area, check out our DevSecOps page.

 

What can a Digital Twin do for you?

Meaningfully improving your organisation’s operations sometimes requires more than just tinkering: it can require substantial change to bring everything up to scratch. But the risks of getting it wrong, especially for mission critical solutions depended on by multiple parties, frequently turn decision makers off. What if you could trial that change, with reliable predictions and the potential to model different scenarios, before pushing the button?

CACI’s Digital Twin offers just that capability. Based on an idea that’s breaking new ground everywhere from businesses like BMW to government agencies like NASA, it gives decision makers a highly accurate view into the future. Working as a real-time digital counterpart of any system, it can be used to simulate potential situations on the current set-up, or to model the impact of future alterations.

Producing realistic data (that’s been shown to match the effects of actual decisions once they’ve been undertaken), this technology massively reduces risk across an organisation. Scenario planning is accelerated and can handle greater complexity, resulting in better alignment between decision makers.

What are Digital Twins doing right now?

From physical assets like wind turbines and water distribution networks, Digital Twins are now being broadly used for business operations, and federated to tackle larger problems, like the control of a ‘smart city’. They’re also being used for micro-instances of highly risky situations, allowing surgeons to practise heart surgery and enabling quicker, more effective prototyping of fighter jets.

Recently, Anglo American used this technology to create a twin of its Quellaveco mine; ‘digital mining specialists can perform predictive tests that help reduce safety risks, optimise the use of resources and improve the performance of production equipment’. Interest is also growing in this tech’s potential use within retail, where instability on both the supply and demand sides has been causing havoc since the pandemic.

This technology allows such businesses to take control of their resources, systems and physical spaces, while trialling the impact of future situations before they come to pass. In a world where instability is the new norm, Digital Twins supersede reliance on historical data. They also allow better insight and analysis into current processes for quicker improvements, and overall give an unparalleled level of transparency.

Where does Mood come in?

Mood Software is CACI’s proprietary data visualisation tool and has a record of success in enabling stakeholders to better understand their complex organisations. Mood is crucial to CACI’s Digital Twin solution as it integrates systems to create a single working model for management and planning. It enables collaborative planning, modelling and testing, bringing together stakeholders so they can work to the same goals.

Making effective decisions requires optimal access to data – and the future is the one area we have no data on. But with Digital Twin technology, you can chart your own path and make decisions with an enhanced level of insight.

If you’re looking for more on what Digital Twin might be able to do for you, read ‘Defence Fuels – Digital Twin’. In this white paper we show how we’re using Digital Twin to make improvements worth millions of pounds.

How to create a successful M&A IT integration strategy

From entering new markets to growing market share, mergers and acquisitions (M&As) can bring big business benefits. However, making the decision to acquire or merge is the easy part of the process. What comes next is likely to bring disruption and difficulty. In research reported by the Harvard Business Review, the failure rate of acquisitions is astonishingly high – between 70 and 90 per cent – with integration issues often highlighted as the most likely cause.

While the impact of M&A affects every element of an organisation, the blending of technical assets and resulting patchwork of IT systems can present significant technical challenges for IT leaders. Here, we explore the most common problems and how to navigate them to achieve a smooth and successful IT transition.

Get the full picture

Mapping the route of your IT transition is crucial to keeping your team focused throughout the process. But you need to be clear about your starting point. That’s why conducting a census of the entire IT infrastructure – from hardware and software to network systems, as well as enterprise and corporate platforms – should be the first step in your IT transition.

Gather requirements & identify gaps

Knowing what you’ve got is the first step; knowing what you haven’t is the next. Technology underpins every element of your business, so you should examine each corporate function and business unit through an IT lens. What services impact each function? How will an integration impact them? What opportunities are there to optimise? Finding the answers to these questions will help you to identify and address your most glaring gaps.

Seize opportunities to modernise

M&As provide the opportunity for IT leaders to re-evaluate and update their environments, so it’s important to look at where you can modernise rather than merge. This will ensure you gain maximum value from the process. For example, shifting to cloud infrastructure can enable your in-house team to focus on performance optimisation whilst also achieving cost savings and enhanced security. Similarly, automating routine or manual tasks using AI or machine learning can ease the burden on overwhelmed IT teams.

Implement strong governance

If you’re fusing two IT departments, you need to embed good governance early on. Start by assessing your current GRC (Governance, Risk and Compliance) maturity. A holistic view will enable you to target gaps effectively and ensure greater transparency of your processes. In addition to bringing certainty and consistency across your team, taking this crucial step will also help you to tackle any compliance and security shortfalls that may result from merging with the acquired business.

Clean up your data

Managing data migration can be a complex process during a merger and acquisition. It’s likely that data will be scattered across various systems, services, and applications. Duplicate data may also be an issue. This makes it difficult to gain an updated single customer view, limiting your ability to track sales and marketing effectiveness. The lack of visibility can also have a negative impact on customer experience. For example, having two disparate CRM systems may result in two sales representatives contacting a single customer, causing frustration and portraying your organisation as disorganised. There’s also a significant financial and reputational risk if data from the merged business isn’t managed securely. With all this in mind, it’s clear that developing an effective strategy and management process should be a key step in planning your IT transition.

Lead with communication

Change can be scary, and uncertainty is the enemy of productivity. That’s why communication is key to a successful merger and acquisition: ensuring a frequent flow of information can help to combat that uncertainty. However, IT leaders should also be mindful of creating opportunities for employees to share ideas and concerns.

If you are merging two IT departments, it is important to understand the cultural differences of the two businesses and where issues may arise. This will help you to develop an effective strategy for bringing the two teams together. Championing collaboration and knowledge sharing will go a long way towards helping you achieve the goal of the M&A process – a better, stronger, more cohesive business.

How we can help

From assessing your existing IT infrastructure to cloud migration, data management and driving efficiencies through automation, we can support you at every step of your IT transition.

Transitioning your IT following M&A? Contact our expert team today.

Eight crucial steps for Telcos to get TSR ready

Following the introduction of the Telecommunications (Security) Act into UK law in late 2021, all telecommunications providers will soon need to comply with ‘one of the toughest telecoms security regimes in the world’ or risk financial penalties of up to £10m.

With the clock counting down for Telcos to enter a new era of security, we consider the critical steps for providers to prepare for the regulatory road ahead.

1. Identify your gaps

Understanding your current state is the first step in achieving a successful transformation. A full audit of your security strategies, plans, policies, and effectiveness will expose your weaknesses and gaps, enabling you to take the right actions to protect your business and ensure compliance.

2. Prioritise your most pressing threats

While gathering data can provide better visibility of your network, taking reactive action to lower your risk isn’t the most efficient approach. Establishing levels of prioritisation will ensure your resources are being used to reduce risk in the right areas.

3. Get the right people in place

From gap analysis to operating model design, programme delivery, and reshoring, it’s likely you’ll need more people in place and new competencies developed. Getting the right partnerships and people now is key to getting ahead.

4. Incorporate legacy issues into your planning

Today’s telecommunications industry is built on multi-generational networks, and legacy systems continue to underpin critical infrastructure. While extracting these systems is not going to happen overnight, dealing with your legacy infrastructure should be an integral part of planning your implementation of the new Telecoms Security Framework.

5. Implement transparent designs

Failing to disclose evidence of a breach could result in a £10m fine, so built-in transparency and traceability are key to your programme. Consider the information requests that are likely to come, and ensure your design changes enable clear tracking and reporting.

6. Embed a security-first focus

Mitigating the risks facing the UK’s critical national infrastructure is the driving force behind the TSRs, and telecommunications providers will need to ensure that this mindset is embedded in the everyday. Buy-in from the business is core to any cultural shift, so align your leadership with a shared, cross-functional vision and get some early delivery going to build gradual momentum.

7. Prepare for more legislation

In November 2021, the Government announced The Product Security and Telecommunications Infrastructure Bill (the PSTI) to ensure consumers’ connected and connectable devices comply with tougher cybersecurity standards. As cybersecurity evolves, so will the threats to organisations, and telecommunications providers must be prepared for more regulatory oversight.

8. Embrace the benefits of built-in security

Ultimately, security that is built in rather than bolted on will enable providers to offer better protection and performance for customers, as well as foster trust with greater transparency. While the industry may not have been seeking the Telecoms Security Act, its passing should prompt action to remove the constraints of old, and to reimagine and reshape in order to seize the opportunities of a new era.

For more information about TSR, download The impact and opportunities of the Telecoms Security Requirements report.

7 key things you need to know about the Telecoms (Security) Act

The introduction of The Telecommunications (Security) Act into UK law late last year marked the arrival of a new era of security for the telecommunications sector, where everyone – from executive to employee – is responsible for protecting the UK’s critical network infrastructure against cyber attacks.

However, embedding a security-conscious culture from top to bottom requires significant resource and expertise to steer towards success. With the clock already counting down, telecommunications providers are under pressure to begin their TSR compliance journey whilst ensuring that existing change programmes stay on track. Here, we outline the key considerations for communications leaders looking to navigate the obstacles and seize the opportunities that lie ahead.

Clear visibility is critical

Protecting your network, applications and data has never been more critical. However, blind spots, missing data, and the risk of dropped packets make management and protection of these challenging, not to mention the scale and complexity of many providers’ hybrid network infrastructure. Nonetheless, providers must ensure they are able to monitor security across the entirety of their network and can act quickly when issues arise.

Security and service quality will need to be carefully balanced

Whilst enhancing security is the ultimate goal of the Act, this cannot be at the cost of network performance. Outages themselves can put providers in breach of the regulations.

Security scanners are a key line of defence for network security, helping to identify known vulnerabilities which can be exploited if the correct mitigation steps aren’t followed, so ensuring you have a robust vulnerability management process is critical. Incorporating the right vulnerability scanning tools, and following the required change management processes to implement them correctly, will help to secure your network whilst minimising any performance impact on your existing infrastructure and the risk of service outages.

Auditing abilities are a new superpower

Demonstrating compliance with the new legislation may pose a significant challenge to providers, particularly as they attempt to flow down security standards and audit requirements into the supply chain. However, implementing robust auditing processes to identify and eliminate weaknesses and vulnerabilities is a must for keeping providers on the right side of the regulations.

Knowledge is power

With any significant legislative change comes a period of uncertainty as businesses adapt, so getting to grips with the new regulations ahead of the game is key. Many providers have already begun the search for talent with the technical skills and experience to deliver their TSR programmes; however, with the jobs market at boiling point, some providers may find that external partnerships provide a more practical route to successful delivery, as well as a means to upskill and educate internal teams.

You’ll be tested

In 2019, OFCOM took over TBEST – the intelligence-led penetration testing scheme – from DCMS and has been working with select providers on implementation of the scheme. Whether through TBEST or not, providers will be expected to carry out tests that are as close to ‘real life’ attacks as possible. The difficulty will be in satisfying the requirement that “the manner in which the tests are to be carried out is not made known to the persons involved in identifying and responding to security compromises.”[1] Providers may need to work with an independent vendor to ensure compliant testing.

Costs are still unclear

While the costs of complying with the new regulations are still undetermined, an earlier impact assessment of the proposed legislation carried out by the government indicated that initial costs are likely to be hefty: “Feedback from bilateral discussions with Tier 1 operators have indicated that the costs of implementing the NCSC TSR would be significant. The scale of these costs is likely to differ by size of operator and could be of the scale of over £10 million in one off costs.”[2]

Culture may challenge change

Technology will, of course, be at the forefront of communications leaders’ minds, yet the cultural changes required to successfully embed a security-first mindset are of equal importance and must be considered in equal measure. Change is never easy, particularly when there is a fixed deadline in place; however, delivery that is well-designed and meticulously planned is key. Ultimately, the onus will be on leaders to craft a clear vision – achieving network security that is intrinsic by design – as well as mapping out the road to get there.

Looking for more information about TSR? Download The impact and opportunities of the Telecoms Security Requirements report.

 

[1] The Electronic Communications (Security Measures) Regulations 2021 [draft] 

[2] The Telecommunications Security Bill 2020: The Telecoms Security legislation 

Reporting-ms Performance Issues – Solution Overview

The Problem

Picture the scene: a new application has been carefully developed and nurtured into existence. It has been thoroughly tested in a small test environment, and the team are awaiting its first deployment to a multi-pod environment with bated breath. The big day comes and… the CTF tests immediately highlight severe issues with the application. Cue tears.

Naturally, this was not the fanfare of success the team had been hoping for. That being said, there was nothing for it but to dive into everyone’s favourite debugging tool… Kibana.

Checking the logs in Kibana confirmed the team’s worst fears. Audit messages were being processed by the app far too slowly.

Two symptoms stood out:

• Figure 1 below shows audit events taking up to 16.5s to be processed; under normal circumstances this should be under 0.1s per message.
• Pods were seen to be idle (while under load) for up to a minute at a time.

Figure 1: Audit message performance issues

Background

Performance problems are never fun. The scope tends to be extremely wide and there is no nice clean stack trace to give you a helping hand. They often require multiple iterations to fully solve and more time investigating and debugging than implementing a fix.

In order to know where to begin when looking at the performance of an application, it’s important to understand the full flow of information throughout the app so that any critical areas or bottlenecks can be identified.

In this scenario, the diagram shown in Figure 2 represents the key processes taking place in the app: the producer of the messages (audit events) fires them onto a broker (a Kafka queue), from which they are consumed by the application (reporting-ms) and stored in a database.

Figure 2: Reporting-ms design overview
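As a rough sketch of that consume-and-store flow (assuming a Spring Kafka setup, with hypothetical type and topic names rather than the actual reporting-ms code), the consumer side looks something like this:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class AuditEventConsumer {

    // Hypothetical Spring Data repository for persisting audit events.
    private final AuditEventRepository repository;

    public AuditEventConsumer(AuditEventRepository repository) {
        this.repository = repository;
    }

    // Consume audit events from the Kafka topic and store them in the database.
    @KafkaListener(topics = "audit-events", groupId = "reporting-ms")
    @Transactional
    public void onMessage(String message) {
        AuditEvent event = AuditEvent.parse(message); // parse and prepare the incoming message
        repository.save(event);                       // persist it for reporting
    }
}
```

Each hop in that chain – producer, broker, consumer, database – is a candidate bottleneck.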

Each of these areas could be to blame for the slow processing times and so it was important that each was investigated in turn to ensure the appropriate solution could be designed.

The steps decided for the investigation were therefore:

  • Investigate audit-enquiry performance
    • This app is the producer of the messages for the CTF tests – if it takes too long to put the messages onto the broker, it would increase the time taken for a full set to be processed and committed.
  • Investigate pod performance
    • These pods allocate resource to reporting-ms – if the app is not allocated enough memory/CPU, this would cause performance issues and would explain the idle pods under load.
  • Investigate database load
    • As this is Aurora-based, we have separate instances for reading and writing. It is important to understand whether these are resource-bound, to ensure adequate performance.
  • Investigate application (reporting-ms) performance
    • The application parses the incoming messages and prepares them for storage in the database. If there are inefficiencies in the code, this could also decrease performance.

The investigation

  • Audit-enquiry
    • Hundreds of audit messages could be added per second, meaning this was ruled out as the cause of the issues.
  • Pod resource allocation
    • Looking at metrics in Grafana, it was possible to see that the pods never hit more than 60% of their allocated CPU/memory thresholds, meaning they were not resource-bound.
  • Database load
    • Figure 3 shows the database hitting 90% or more CPU usage during the CTF tests, indicating that the issue lay somewhere between reporting-ms and the database.

Figure 3: Database load (CPU usage) over time

As mentioned above, no stack trace was available for the team to rely on. No exceptions were being fired and other than slow processing times all seemed to be well with the application. The only way forward was to create our own set of logging to try and peel back the layers of pain and decipher what area required a quick amputation.

The first iteration of this logging did not go well. The key processes had been identified, each one surrounded by a simple timer, and a flashy new set of logs was appearing in the console output. Unfortunately, 99% of the time was appearing under one of these processes, and database reads/writes were seemingly instantaneous. This, as one of the team more eloquently described at the time, did not seem right (rephrased for your viewing), especially considering the extreme CPU usage seen above. Once again, the tool shed had to be reopened, spades retrieved, and more digging into the problem begun.

The cause of our logging woes turned out to be an @Transactional annotation that surrounded our audit-message processing: because the transaction only commits when the annotated method completes, the actual writes to the database were not happening at the points indicated in the code, so the in-method timers never captured them. With this in mind, the second iteration of logging was released, and a much more insightful set of metrics could be seen in the console log. The output of these metrics can be seen below in Figure 4.
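To illustrate the trap (a simplified sketch with hypothetical names, not the actual reporting-ms code): a timer placed inside a @Transactional method mostly measures the time spent staging changes in the persistence context, because the SQL flush and commit are typically deferred until the method returns through Spring’s transactional proxy.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AuditMessageProcessor {

    // Hypothetical Spring Data repository for audit events.
    private final AuditEventRepository repository;

    public AuditMessageProcessor(AuditEventRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void process(AuditEvent event) {
        long start = System.nanoTime();

        repository.save(event); // usually just queues the insert in the persistence context

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Misleading: the flush and commit happen AFTER this method returns, when the
        // transactional proxy closes the transaction, so the slow database work
        // never shows up in this timer.
        System.out.printf("processed message in %d ms%n", elapsedMs);
    }
}
```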

  • Reporting-ms
    • The statistics mentioned above were added to track the processing time of key processes inside the app. As seen in Figure 4, the database retrieval time was almost 90% of the total processing time, further validating our theory that this was the cause of the issue.
    • By turning on Hibernate’s ‘show-sql’ feature, it was possible to see that 20+ select queries were being run per processed message. The cause was eager loading on the entities: all information linked to the incoming message was being retrieved from the database.

Figure 4: Performance metrics

The solution/results

With the investigation complete and the root cause revealed, the team eagerly set upon implementing a solution.

The resolution in this scenario was to add a lazy-load implementation to the entities within reporting-ms and to cache the key lookup tables that were small and unlikely to change. With this strategy in place, the number of select queries run against the database per message was reduced from 20+ to five or fewer.
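In JPA terms, the change looks roughly like the sketch below (hypothetical entity names, not the real reporting-ms model): associations are switched to lazy fetching so related rows are only loaded when actually needed, and small, rarely changing lookup entities are marked cacheable so repeated messages don’t hit the database for them. Depending on the stack, the imports may be javax.persistence rather than jakarta.persistence.

```java
import jakarta.persistence.Cacheable;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.ManyToOne;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
public class AuditEventEntity {

    @Id
    @GeneratedValue
    private Long id;

    private String payload;

    // Lazy fetching: related rows are no longer selected for every incoming
    // message, only when the association is actually navigated.
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "event_type_id")
    private EventType eventType;
}

// Small, rarely changing lookup table: cached in Hibernate's second-level
// cache so repeated messages resolve it without another select.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
class EventType {

    @Id
    private Long id;

    private String name;
}
```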

A day or two later, application_v2 was ready for deployment. Thankfully (for the sake of our mental well-being), the CTF results returned a much more positive outlook this time! More in-depth results of this implementation can be seen in the screenshots below of database CPU usage and average audit message processing times. In particular, the 90% reduction in processing time was brilliant to see, and the team were delighted with the corresponding time savings and improvements in the test runs.

Database CPU Utilisation

Total processing time

Find out more about our secure application development capabilities.