Jack Danahy: Today, I’m speaking with Justin Fimlaid about the choices that he and the team at NuHarbor Security have made in selecting the technologies that support the services that they offer.

Justin, if one looks at Forrester or Gartner analysts for their expectations of what a great service provider will be, a lot of times they talk about the existence of a curated tech stack. They’ve seen security providers fail because they try to manage every kind of security technology, or try to ingest information from everyone, and they recognize the problems that this “come as you are” approach can cause.

Clearly, NuHarbor took a very different tack to create critical services based on technologies that you and the team believed were best in market. These were solutions that provided you with the unique capabilities required to deliver your consistently high level of service. What I’d like to discuss today are the reasons why you have made these particular choices and what some of the impacts have been for NuHarbor’s customers over the years. I’m going to start at what I see as the architectural foundation of the service.

Splunk

NuHarbor chose Splunk as its core platform for supporting data collection and aggregation.  What were the reasons that you chose Splunk, and what were the unique capabilities you thought were necessary?

Justin Fimlaid: I founded NuHarbor as an end-to-end cybersecurity solutions provider in 2014. My goal, and the driver of our strategy in picking technologies for our portfolio, is to advise and support chief information security officers (CISOs) and chief information officers (CIOs) in delivering effective enterprise security programs for their businesses and institutions.

The cybersecurity marketplace, in the most general terms, is very fragmented from both a services and a product standpoint. As a former CISO, I’ve built security teams, delivered enterprise security programs, and worked with vendors and partners in the cybersecurity space. In the process, I personally experienced the challenges of this technical complexity early on, especially as I started to consume some of these services. I understood what it meant to purchase these solutions as a consumer, and I was left wanting more. I saw that some of these providers were limited by a lack of technology, or talent, or something else.

So, when we were starting the company, it was most fundamentally an effort to address the need for data analytics that could paint a comprehensive and comprehensible cybersecurity picture and create the security outcomes I felt the industry needed. As a result, we chose Splunk as our data analytics platform. Having worked with Splunk in the past as a customer, I was intimately familiar with deploying that platform at an enterprise scale for cyber. More importantly, though, Splunk offered a foundation that was flexible, extensible, scalable, and managed data really well. Whether we were talking about ten gigabytes or ten terabytes, the architecture of the platform allowed us to essentially scale solutions based on the size of our clients.

The other piece that was critical for us was the flexibility of Splunk data ingest. Being able to run custom advanced correlations against data from unique technologies was something I found in Splunk and not in others at the time. A good example was running advanced correlations against data supplied by mainframes. If you have ever worked with an IBM AS/400 system or something like it, you know that it’s incredibly difficult to get data from it. When you can, for the purposes of security investigations or correlations, it’s offering information in a proprietary language. If the mainframe gives you a code of one, two, or three, that value isn’t meaningful until the machine status is run against the right decoder ring. That decoding allows you to determine that if it’s a one, maybe it’s a fatal error. If it’s a two, you need to run a debug sequence. If it’s a three, it means something else entirely.
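To make that “decoder ring” idea concrete, here is a minimal Python sketch of the kind of lookup-based enrichment Justin is describing. The status codes, meanings, and field names are hypothetical; a real deployment would source the mapping from the mainframe vendor’s documentation and would typically apply it as a lookup inside the analytics platform rather than in standalone code.

```python
# Hypothetical decoder ring: mainframe status codes -> human-readable meaning.
# The codes and meanings below are illustrative, not taken from any real AS/400 manual.
STATUS_DECODER = {
    "1": {"meaning": "fatal error", "action": "open a high-priority incident"},
    "2": {"meaning": "needs debug sequence", "action": "queue a debug job"},
    "3": {"meaning": "informational", "action": "log and continue"},
}

def enrich_mainframe_event(raw_event: dict) -> dict:
    """Translate a raw status code into something an analyst can act on."""
    code = str(raw_event.get("status_code", "")).strip()
    decoded = STATUS_DECODER.get(code, {"meaning": "unknown code", "action": "route to triage"})
    return {**raw_event, **decoded}

if __name__ == "__main__":
    event = {"host": "as400-prod-01", "status_code": 1}
    print(enrich_mainframe_event(event))
    # -> {'host': 'as400-prod-01', 'status_code': 1, 'meaning': 'fatal error', ...}
```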

So Splunk offered us the data set extensibility that we needed to run necessary complex correlations across the many systems of the various clients we served. We were able to integrate information from the technologies that they came to us with, whether they were a Fortune 100 company or a large state government. We were uniquely able to do security investigations and meaningful monitoring of those platforms. This was the second appeal of Splunk.

The third appeal was the Splunk concept of “adaptive response”. This predates the current concept of security orchestration, automation, and response (SOAR), but in the early 2010s, I was convinced about the value of a security analytics platform serving as the virtual “nervous system” of your entire security operation. The right security analytics platform should be able to see everything that is gathering data or providing control within your enterprise security program. In our case, Splunk provided the foundation for our service infrastructure so that it would see the data coming in, offer the ability to run correlations informed by new threats or past experience, and then create and enrich the alerts that were generated. Having that as a foundation early on enabled us to formulate the right types of adaptive response for multiple types of events.

As an example, consider what happens when an endpoint alert is generated, and now you know that something is happening on an endpoint. Let’s just say you also wanted to do a forensic collection, or you wanted to look at the disk.  Using that alert as a trigger, you can build those actions into Splunk.  So, generally, when an alert fires or an event triggers, you can specify actions to be taken in a way that saves your staff time and improves your consistency.  That was very appealing to me. That consistency of results, speed of response, and the application of Splunk as the security program nervous system was what we hung our hat on pretty early in the company’s existence. We were regarded as forward-looking, progressive about how we went to market, and we were doing this in a way that no one else had done before.
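As a rough sketch of that trigger-then-act pattern, the example below shows an alert handler that runs pre-approved follow-up actions automatically. This is not Splunk’s actual adaptive response interface; the alert types, action names, and functions are hypothetical and only illustrate the shape of the idea.

```python
# Sketch of the alert -> automated action pattern described above.
# The playbook entries and collection functions are hypothetical placeholders.
from typing import Callable, Dict, List

def collect_forensic_image(host: str) -> None:
    print(f"[action] starting forensic collection on {host}")

def snapshot_disk(host: str) -> None:
    print(f"[action] snapshotting disk on {host}")

# Map alert types to the follow-up actions an analyst would otherwise run by hand.
PLAYBOOK: Dict[str, List[Callable[[str], None]]] = {
    "endpoint_malware_detected": [collect_forensic_image, snapshot_disk],
}

def handle_alert(alert: dict) -> None:
    """When an alert fires, run every pre-approved action for that alert type."""
    for action in PLAYBOOK.get(alert["type"], []):
        action(alert["host"])

handle_alert({"type": "endpoint_malware_detected", "host": "laptop-042"})
```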

So really, that drove our decision to go with Splunk, and to build our security analytics backbone on top of Splunk. Today, all our services are built on top of Splunk and not only can we see all the data, but we also have the ability to do advanced correlations and take secondary actions through adaptive response.  During the period when we were starting, we predated SOAR and we were designing and building this automation and analytic framework ourselves.

Jack Danahy: That’s great, and your reasons for choosing Splunk make sense: You looked at the problems you experienced as a practitioner in the space and wanted to create a platform that would allow you to resolve them. To do it, you needed to offer analytics and actions based on the security data, leading to the choice of a platform that would both scale and accommodate a variety of data types you may not have even identified at the time you started. That architecture and the extensibility made sense. NuHarbor leveraged Splunk’s adaptive response capability to take action in a pre-SOAR environment to either enrich or act on the information received, making the company responsive, thorough, and more efficient.

ThreatConnect

So, that gives us a good picture of your criteria when choosing Splunk as the platform to gather the varying kinds of security information related to activity on your customers’ sites. You’ve made another choice, one that informs your capability to know what to look for in all of that data. While you and the team have uncommon experience in researching new threats and defining correlations to identify them, you also take advantage of ThreatConnect to gather intelligence from multiple external sources. How did you pick ThreatConnect?

Justin Fimlaid: Our approach to ThreatConnect was a bit atypical compared to how we’ve selected some of the other technologies in our portfolio. Let’s start at the beginning.

The problem with threat intelligence is that it has an especially short shelf life. Even if you’re getting open-source feeds, it’s likely that the threat intel is already stale by the time you receive it. On average, those open-source threat lists are updated only once every 72 hours, although some do so more frequently.

Let’s say your source is in the 72-hour camp; this means that by the time you actually receive that threat intelligence, the threat has been publicly known for days.  You could already have malicious actors knocking on your front door and you would never see it because you didn’t have that threat update in time.  The conclusion we reached was that in our commitment to securing our customers, we needed to do some of this ourselves.   With our growth and our visibility to activity across hundreds of clients, we can see much more across the breadth of their environments, including new threat patterns. We can tell a threat story, and create broadly usable identifications, with an immediacy that open-source feeds can’t match.

So, when we looked to define a current and comprehensive view of the threat landscape, we chose to combine our own threat feeds with a limited number of high-fidelity open-source threat feeds. We combined these two bodies of intelligence for the purpose of then sharing it to protect our clients. In this case, the obvious choice for us was to use ThreatConnect because it enabled us to do that threat aggregation and pipe it back out to our clients in an orchestrated manner. The architecture of the platform made it straightforward to enable that capability.

The second element in our choice was an emerging requirement from our customers to provide STIX- and TAXII-formatted feeds for our threat intelligence. Our clients recognized the value in our curated feed, and wanted to use it as an input into other security technologies, like their firewalls, endpoints, or web application security solutions. ThreatConnect offered us that extensibility in a completely automated way. To bring it all together, the reason we chose ThreatConnect was because they had a good architecture, and they shared our philosophy that threat intel curation is kind of a community game. It gets better when more people participate in it. For us, those people participating are our researchers and our clients, and through ThreatConnect we’re putting all the pieces together and delivering them in the forms that our clients wanted.
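For readers unfamiliar with STIX and TAXII, the sketch below shows roughly what pulling STIX indicators from a TAXII 2.1 collection looks like. The server URL, collection ID, and credentials are placeholders, and a real integration would use ThreatConnect’s documented feed endpoints and handle authentication and paging accordingly.

```python
# Minimal sketch of pulling STIX indicators from a TAXII 2.1 collection.
# URL, collection ID, and credentials are hypothetical placeholders.
import requests

TAXII_OBJECTS_URL = (
    "https://taxii.example.com/api/v21/collections/"
    "11111111-2222-3333-4444-555555555555/objects/"
)
HEADERS = {"Accept": "application/taxii+json;version=2.1"}

def fetch_indicators() -> list:
    resp = requests.get(
        TAXII_OBJECTS_URL,
        headers=HEADERS,
        auth=("feed-user", "feed-password"),  # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    envelope = resp.json()
    # A TAXII envelope wraps STIX objects; keep only the indicators.
    return [obj for obj in envelope.get("objects", []) if obj.get("type") == "indicator"]

for indicator in fetch_indicators():
    print(indicator.get("pattern"), indicator.get("valid_from"))
```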

Jack Danahy: So, if I combine what you’ve told us about ThreatConnect and the capabilities you described in Splunk, clearly there’s a platform where our NuHarbor analysts are able to look at a pretty broad source of data while remaining well informed on things they’re learning from one another, from our clients, and from these third-party feeds. They’re equipped to begin to identify what matters to them, respond quickly, and then share some of that data broadly, which is great. That increases the value of the analysts through the use of the two platform choices you’ve made, making them more effective and enabling them to be more productive across multiple customers.

There are another four technologies that you selected that I’d like to discuss. These are technologies that provide specific security capabilities, and they aren’t enabling technologies like Splunk or ThreatConnect. I’m going to run them in chronological order, starting at the beginning of NuHarbor time.

Tenable

One of the first things organizations looked to address in security was understanding whether or not the systems that they were using had known vulnerabilities. The ability for an attacker to recognize a vulnerable system and act upon it has been a problem going back to the beginnings of the Internet. NuHarbor chose Tenable for the critical area of vulnerability management. Can you tell me how you made that decision?

Justin Fimlaid: Yes, believe it or not, we had originally started with Qualys in our very early days, but we found that their consumption model was challenging for us and for our clients.  Outside of the technology, the Qualys strategy for going to market didn’t align with our clients’ preferred consumption model for the technology.  We have always spent significant effort on understanding these non-technical aspects of our customers’ security programs, and this was a mismatch in the market dynamics.

We made the switch over to Tenable for a variety of reasons. It started with the Tenable founder and CEO, Ron Gula, with whom I had had a good relationship for some time. If you know Ron or if you have known him, he’s an approachable guy. He actively cares about clients and customers. The couple of times that I’ve gone out to lunch with Ron to talk about Tenable and other business, his questions were always client-oriented, which carries a lot of weight with me. He was tracking revenue because of his role, but his questions were always rooted in whether or not we were doing the best thing for Tenable’s customers. He was always trying to understand their challenges and he wanted to internalize the problems that Tenable was solving for clients, making sure that Tenable was hitting the mark.

In talking with Ron, he seemed to share my core philosophical belief:  If you do everything that you’re supposed to do for the client, if you self-select the right behaviors and try to add as much value to the client as you possibly can, revenue will follow. To me, that really hit home. We’re cut from the same cloth, so NuHarbor chose to switch and adopt a new relationship with Tenable.

During this same period, vulnerability management was going through its own transformation. There was an ongoing architectural transformation, driven by insufficiencies in the existing model. Existing scanners were appliance-based, operating like a VM or a dedicated server instance. Systems had to be connected to the network, either through VPN or jacked in locally, in order to be scanned. The resulting scan was only good for as long as the computer stayed connected. The gaps that this created meant that appliance-based scanning was transforming into agent-based scanning: scanning agents on local endpoints were beginning to appear that would scan locally and report into a cloud.

In the new and improved model, no matter where that laptop or server existed, it was being scanned locally through an agent, using the local machine to run the scan and then uploading the results. All of the horsepower required to run that scan was generated locally, which meant that scanning 20,000 assets or 40,000 assets would no longer take 2-3 weeks, but could be done in a matter of hours.

In the background, architecturally speaking, that change was impacting the vulnerability management sector. For NuHarbor, this shift was the great equalizer during the switch from Qualys to Tenable. There was evolving parity among the architectures of vulnerability scanners. The second reason was that Tenable had their “log correlation engine”. The log correlation engine took your asset posture, coupled with your vulnerability management posture, to provide more value. The log correlation capability within Tenable was difficult for clients to configure and manage, and when they could, they were drowning in vulnerability information.

For NuHarbor, our decision to go with Splunk gave us the ability to ingest and contextualize that vulnerability information. We could recreate it in Splunk in a unified way, with other assets and asset information. We provided the value that the log correlation engine was trying to deliver, and for our clients that had invested in the log correlation engine, we were able to tell them to stick with their Tenable platform. They could keep their scanning cadence and their patching cadence, and allow us to ingest that information into Splunk as the analytics backbone. It made our analysts better, as we were now able to take real-time vulnerability information and correlate it with asset information we already had. To bring it all together, our relationship with Tenable originated in, and continues to thrive on, shared philosophy. We’re both focused on taking care of our clients, working with them over time to build out their capabilities to create robust and scalable vulnerability management programs.
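As a simple illustration of that correlation, the hypothetical Python sketch below joins vulnerability findings with asset context so that the same finding can be prioritized differently depending on the asset it sits on. In practice this kind of join would be expressed as a search or lookup inside Splunk rather than as standalone code, and the field names here are invented for the example.

```python
# Hypothetical example: enrich vulnerability findings with asset context for prioritization.
ASSETS = {
    "10.0.1.20": {"owner": "finance", "exposure": "internet-facing", "criticality": "high"},
    "10.0.8.45": {"owner": "lab", "exposure": "internal", "criticality": "low"},
}

FINDINGS = [
    {"ip": "10.0.1.20", "plugin": "OpenSSL out of date", "severity": 7.5},
    {"ip": "10.0.8.45", "plugin": "OpenSSL out of date", "severity": 7.5},
]

def prioritize(findings, assets):
    """Raise priority when a severe finding sits on a critical, exposed asset."""
    enriched = []
    for finding in findings:
        asset = assets.get(finding["ip"], {})
        urgent = finding["severity"] >= 7.0 and asset.get("criticality") == "high"
        enriched.append({**finding, **asset, "priority": "urgent" if urgent else "routine"})
    # Sort urgent findings to the top of the work queue.
    return sorted(enriched, key=lambda row: row["priority"] != "urgent")

for row in prioritize(FINDINGS, ASSETS):
    print(row["ip"], row["plugin"], row["priority"])
```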

Jack Danahy: That’s the next step: more than understanding what to do as they’re watching what’s happening now, they’re able to take a feed that helps them understand where exposures exist and help the teams contextualize it using Tenable through Splunk. It feeds an understanding of how to manage and prioritize vulnerabilities, making the changes that are necessary to address them. Through the decision that you had already made to support Splunk, you were able to mitigate the correlation gap that had arisen with the native log correlation engine from Tenable.

CrowdStrike

Let’s move forward from vulnerability assessment to an area that almost every company has made an investment in: endpoint protection. NuHarbor decided to partner with CrowdStrike, a relatively new entrant into the marketplace in comparison to some of the older legacy vendors. Can you tell me why CrowdStrike was the right choice for NuHarbor?

Justin Fimlaid: Yes, the market is crowded and it’s tough to pick a solution with meaningful differentiation. At the time we chose to go with CrowdStrike, there was a recognition that our organization needed to do more on the endpoint and with endpoint data. Prior to doing anything with CrowdStrike, I operated with an expectation that the endpoint was always compromised, because the AV market, and the next-gen AV market, in terms of the Cylances and Carbon Blacks, were still in their early versions. There was considerable noise coming from those solutions, and it was tough to tell what was going on at any point.

This was especially the case with historical leaders like McAfee and Symantec. Sometimes they alerted, sometimes they didn’t, but organizations always relied on them to have the right signature to block malicious code. Given the rapid pace of malware change, it was too hard to figure out what was going on in the endpoint, so, for the sake of simplicity, you’d assume all endpoints were compromised and build your security architecture accordingly. This meant not putting sensitive information on an endpoint. That was just the era we were working in. Looking back on it now, it seems silly, but that’s where we were at.

As the market evolved, EDR got a bit smarter and so did next gen anti-virus. The recognition that we needed to do more with the endpoint was becoming real, and an increasing number of clients were starting to ask for help on how to solve these challenges. This is where CrowdStrike stood out for me, for a couple of reasons.

The first reason was that one of the first discovery calls that I ever took as a CISO was actually with CrowdStrike. I don’t remember the year, but it was right when they were getting started. The premise behind their value was crowdsourcing endpoint and threat telemetry; trying to draw patterns across a population of endpoints. The more endpoints that subscribed to this model, the smarter the model got. I was super intrigued and impressed by this idea.

At that time, though, CrowdStrike wasn’t yet established, and didn’t have a broad base of customers to feed their model. As a customer, I didn’t end up selecting them, but I continued to follow their progress. As we fast-forward to making an endpoint decision at NuHarbor, I was impressed to see how much market momentum CrowdStrike had gained. Again, the idea of crowdsourcing endpoint threat information was appealing to me. It’s related to the same benefits we described about ThreatConnect: threat curation is a community game if you want to do it well and do it quickly. CrowdStrike played into that concept well. I liked the fact that CrowdStrike did some of the pre-thinking and alert distillation that other providers did not do. As an example, if you look at Microsoft Defender, it will give you a heads-up on every alert or event that is being fired within your environment, whether they are false positives or events that are more likely to be real issues.

CrowdStrike had done the pre-thinking on what constituted an event. They alerted about things that someone who is busy running security or running IT would really need to pay attention to. This was something I valued when I chose them.

Technically, they took the correlated information from Sysinternals and determined the top 20 events that, if they were flagged, were absolutely security events with nothing false-positive about them. CrowdStrike determined what specific events meant you should basically wrap your building in yellow police tape because a crime has just occurred. CrowdStrike took most of this analysis off of our plate, which was great because we see a lot of events in the course of the day. It’s refreshing to have a solution saying, if this fires, focus on it, because it’s high fidelity, and you’re not going to get any false positives out of it. It’s a major time saver for us and for our clients. It made it easy for us to integrate and recommend.

There was another factor in our decision, and that’s the Falcon Data Replicator within the platform. It’s a feature that allows NuHarbor, as a provider, to harvest that sysmon information off of the endpoint, using the CrowdStrike Falcon agent.

That’s appealing because that information is difficult to get off of an endpoint, as endpoints are frequently being opened and closed, started, restarted, and turned off. It’s hard to reliably and consistently get that information off of a laptop, seeing as it’s always moving around. The Falcon Data Replicator allowed us to collect that information, pull it into other systems that we manage, and use the endpoint information to run advanced correlations, do incident investigations, or tell stories about circumstances from a security standpoint.
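As a sketch of what consuming that replicated telemetry can look like, the example below assumes the data is delivered to an S3 bucket with SQS notifications, which is the delivery pattern commonly associated with CrowdStrike’s Falcon Data Replicator. The queue URL, message layout, and event fields shown are assumptions for illustration; a real consumer would follow CrowdStrike’s documentation.

```python
# Sketch of draining replicated endpoint telemetry that is delivered to S3
# and announced via SQS. The queue URL, message structure, and field names
# are assumptions for illustration only.
import gzip
import json

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/edr-telemetry-notify"  # placeholder

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

def drain_once() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        notice = json.loads(msg["Body"])  # assumed shape: {"bucket": ..., "files": [{"path": ...}]}
        for f in notice.get("files", []):
            obj = s3.get_object(Bucket=notice["bucket"], Key=f["path"])
            # Telemetry files are commonly gzip-compressed, newline-delimited JSON.
            for line in gzip.decompress(obj["Body"].read()).splitlines():
                event = json.loads(line)
                print(event.get("event_type"), event.get("hostname"))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

drain_once()
```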

In the absence of that feature, getting that endpoint information is a challenge. It’s hard to collect, and it’s difficult and expensive to store. So having that agent makes our lives a bit easier.

Jack Danahy: We’ve talked about security technologies that you’ve picked to enable the team to do their jobs better and more completely, some tech that made information more available to you and, in the case of CrowdStrike, new technology that improves an existing area of protection and also gives new access that helps you to draw more conclusions. These seem to be largely responsive to the threats that our analysts found clients wrestling with, but there are two other companies for us to touch on that I place in the enabler category.

Okta

These are technologies which clients use to create a more secure organization, as opposed to technologies that respond to threats to their security. The first of these is Okta. In terms of working with federated identity management, and understanding how to manage that across systems, how did you decide on Okta as the right choice for NuHarbor?

Justin Fimlaid: I’ve always supported the position that identity done right will create a better user experience for the customers of our clients. It creates a smoother experience for the employees of our clients, who then create a more seamless experience for the businesses that they work within. On top of that, if done correctly, it enables a business strategy and a reduction of risk for our clients that allows them to be more innovative in how they go to market and the business challenges that they can solve. This is especially true for businesses that are consumer-facing.

When I look at the identity and access management market, I see a significant evolution over the last ten years. At this point, we’re struggling with a fragmented “identity and access management” industry in which very different types of companies are competing to be the source of identity management and trust.

There really are four categories of identity and access management that exist today. The first is consumer identity, which contains solutions like Ping Identity. This is the identity that is grounded in a vendor relationship, like your account with Amazon or Starbucks. It’s nothing permanent or persistent; rather, you use it only to authenticate into your relationship with that vendor. You prove you are who you say you are, then do a subsequent or secondary action. You’re going to transact and do something and you may not come back for a week, or maybe over a year. That intermittent and transactional quality of identity is what distinguishes consumer identity.

The second identity is workplace identity, which describes a relationship between business users and their interactions with their organizations. They’re logged in every day, and throughout the day, they’re consuming different services. This persistent connection, with varying activity types, creates a different identity category, and this is the category I would put Okta into.

The third identity is more of a role, relating to privilege. These solutions manage the limited number of powerful credentials typically reserved for administrators or people performing system administration functions. These solutions protect the credentials that act as keys to the entire company, and we would put CyberArk into this group.

Finally, the fourth category would be the idea of identity and access governance, which is a critical component of identity management for larger firms. There are regulations that require you to do quarterly user certification reviews, requiring assessment of all of your users and ensuring that they have the minimum permissions necessary for them to do their job. For instance, if I were to move between jobs in a company, I should only have the minimal permissions required to do the new job I’ve taken, and any access that was only required to do the first job should be removed. I would put SailPoint in this access governance group.

Looking at these categories, we can play a role in supporting any of them, but we play heavily in the complicated environment of the workplace. Clients hire us to help to secure their enterprise. They are looking to secure their users and protect their credentials, while also protecting the assets and activities being authorized by those credentials. We immediately look at Okta as the leader within that space and as an established vendor that has credibility in the market. More than that, we want to help our clients to be more progressive within the identity and access management space for workers.

When I look at what different organizations have gone through over the last three years, I notice that everybody has cloud-first initiatives, and legacy Active Directory or LDAP structures kept on site are becoming less common. Everybody is moving to the cloud, and the idea of having a centralized identity store used for cloud-based authentication and cloud-based apps, and being able to move that identity store closer to where the apps exist, is becoming more popular. Okta has that structure in place, and they have prebuilt connectors in place, making the connectivity of those cloud applications almost seamless.

In this model you can extend authorization security to your employees, but you can also extend authorization to your partners, and you can start to manage and monitor those identities. You begin to couple all those things together. We pay attention to the firms that are leading the market and investing the most into R&D within their platform. After thinking all of the criteria through, we determined that Okta was a pretty easy choice.

Zscaler

Jack Danahy: All right, that brings us to our last technology partner to round out the list. We’ve done a lot of talking about this partner lately, so we might not need as much detail, but we’re discussing the fact that the world has changed. You mentioned that positive criteria for both Splunk and Tenable included their ability to assist in identifying and managing what was going on inside the cloud. As people have gotten more remote, and as the cloud has become more important and user location less important, Zscaler has taken off in terms of helping people out, making them another partner that you chose for NuHarbor. Can you tell me a little bit about why Zscaler was the right choice to enable secure remote application access and connectivity?

Justin Fimlaid: I think you hit it on the head. The world has changed and everybody’s working remotely. The idea of how work is done has changed, and the concept of connectivity has changed. Zscaler offers an architecture that meets all the needs attached to those changes. When I look at Zscaler, that’s an example of security technology enabling better business processes while saving that business money.

When clients adopt Zscaler, they know they’re no longer going to pay for expensive MPLS lines or for extra processing power on a VPN concentrator. With Zscaler they can push out to the edge and create a better user experience. I think that’s the secret and something that IT departments have wanted for a while.

I think one of the most interesting things about Zscaler and the timing of when they came to market was that it was like the Instacart phenomenon. When Instacart came out, I thought it was a pretty good idea, but I didn’t know when I would actually use it.

Zscaler had a similar transformative potential, but widespread adoption and the collateral business change hadn’t been proven yet. I think the tech is good, the idea is good, and companies just need the right strategy and catalyst to realize the benefits. With Zscaler, like Instacart, the pandemic proved that we need to do something different to enable our clients to protect their employees and users, while also securing the connectivity that ties them all together. The timing, demand, and utility among our clients made Zscaler a clear choice for NuHarbor.

Jack Danahy: This has been really helpful in understanding the underpinning of the decisions that NuHarbor has made in creating the curated technology stack that we use to serve and support our customers.  The reasons ranged from technical capability, scalability, founder mentality, and timing, to architectures, customer demand, and the market.  There are a lot of good reasons, a lot of different reasons, a lot of good choices. I think that pretty much covers it.

 
