
How To Set Up A Zero-Trust Network


In the past, IT and cybersecurity professionals tackled their work with a strong focus on the network perimeter. Everything within the network was assumed to be trusted, while everything outside it was treated as a possible threat. Unfortunately, this blunt approach has not survived the test of time, and organizations now operate in a threat landscape where an attacker may already have one foot in the door of their network. How did this come to be? Over time, cybercriminals have gained entry through compromised systems, vulnerable wireless connections, stolen credentials, and other avenues.

The best way to avoid a cyber-attack in this sophisticated new environment is to adopt a zero-trust philosophy. In a zero-trust network, the only safe assumption is that no user or device is trusted until it has proven otherwise. With this approach in mind, we can explore what a zero-trust network is and how you can implement one in your business.


Image courtesy of Cisco

What is a zero-trust network and why is it important?

A zero-trust network, sometimes referred to as zero-trust security, is an IT security model that requires identity verification for every person and device trying to access resources on a private network. There is no single technology associated with this method; rather, it is an all-inclusive approach to network security that incorporates several different principles and technologies.

Traditionally, an IT network is secured with the castle-and-moat methodology: it is hard to gain access from outside the network, but everyone inside the network is trusted. The problem with this security model is that once a hacker gains access to the network, they are free to do as they please with no roadblocks stopping them.

The original theory of zero-trust was conceived over a decade ago; however, the unforeseen events of this past year have propelled it to the top of enterprise security plans. Businesses experienced a mass influx of remote working due to the COVID-19 pandemic, which fractured organizations’ customary perimeter-based security models. With the increase in remote working, an organization’s network is no longer defined as a single entity in one location. The network now exists everywhere, 24 hours a day.

Businesses that pass on adopting a zero-trust network risk a breach in one part of their network quickly spreading as malware or ransomware. Ransomware attacks have increased massively in recent years, and from hospitals to local governments to major corporations, they have caused large-scale outages across all sectors. Going forward, implementing a zero-trust network appears to be the way to go. That’s why we put together a list of things you can do to set one up.


Image courtesy of Varonis

Proper Network Segmentation

Proper network segmentation is the cornerstone of a zero-trust network. Systems and devices must be separated by the types of access they allow and the information that they process. Network segments can act as the trust boundaries that allow other security controls to enforce the zero-trust attitude.
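As a loose illustration, trust boundaries can be modeled as explicit subnet assignments that other controls consult. The segment names and address ranges below are assumptions for the sketch, not a recommended layout:

```python
import ipaddress

# Illustrative segment map -- the names and subnets are hypothetical;
# adapt them to your own address plan.
SEGMENTS = {
    "corporate-workstations": ipaddress.ip_network("10.10.0.0/16"),
    "finance-servers": ipaddress.ip_network("10.20.0.0/24"),
    "guest-wifi": ipaddress.ip_network("192.168.100.0/24"),
}

def segment_of(host):
    """Return the trust boundary a host falls into, or 'untrusted'."""
    addr = ipaddress.ip_address(host)
    for name, subnet in SEGMENTS.items():
        if addr in subnet:
            return name
    return "untrusted"

print(segment_of("10.20.0.15"))  # finance-servers
print(segment_of("8.8.8.8"))     # untrusted
```

Anything that doesn't map to a known segment defaults to "untrusted", which is exactly the posture zero-trust asks for.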

Improve Identity and Access Management

A strong identity and access management foundation is a prerequisite for zero-trust security. The first step is identifying who is attempting to connect to the network; most organizations use one or more identity and access management tools to do this, and users or autonomous devices must prove who or what they are through authentication. Adding multifactor authentication provides further assurance of identity and protects against theft of individual credentials.
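For illustration, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, the scheme behind most authenticator apps used for multifactor authentication. It uses only Python's standard library; a real deployment would rely on a vetted library and secure secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the secret is base32 of "12345678901234567890".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because both sides derive the code from a shared secret plus the current time window, a stolen password alone is no longer enough to authenticate.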

Least Privilege and Microsegmentation

Least privilege applies to both networks and firewalls. After segmenting the network, cybersecurity teams must lock down access between segments to only the traffic essential to business needs. If two or more remote offices do not need direct communication with each other, that access should not be granted. Once a zero-trust network positively identifies a user or their device, it must have controls in place to grant access to only the applications, files, and services that user needs. Depending on the software or machines in use, access control can be based on user identity alone, or combine network segmentation with user and device identification. This is known as microsegmentation. Microsegmentation builds highly secure subsets within a network where a user or device can connect to and access only the resources and services it needs. It is great from a security standpoint because it significantly limits the damage to infrastructure if a compromise occurs.
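A deny-by-default policy check captures the least-privilege idea in a few lines. The identities, device names, and resources below are hypothetical, purely to show the pattern:

```python
# Hypothetical policy table: (user, device) pairs mapped to the only
# resources they need. Every name here is illustrative.
POLICY = {
    ("alice", "laptop-registered"): {"crm-app", "shared-files"},
    ("backup-svc", "server-dc1"): {"backup-target"},
}

def authorize(user, device, resource):
    """Deny by default; grant only what the policy explicitly allows."""
    return resource in POLICY.get((user, device), set())

# A verified user on a registered device reaches only permitted resources.
assert authorize("alice", "laptop-registered", "crm-app")
assert not authorize("alice", "laptop-registered", "billing-db")
# An unknown identity gets nothing -- the zero-trust default.
assert not authorize("mallory", "unknown-device", "crm-app")
```

The key design choice is that absence from the policy means denial; there is no implicit trust to fall back on.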

Add Application Inspection to the Firewall

Cybersecurity teams need to add application inspection technology to their existing firewalls, ensuring that traffic passing through a connection carries appropriate content. Contemporary firewalls go far beyond the simple rule-based inspection they previously performed.

Record and Investigate Security Incidents

A great security system involves vision, and vision requires awareness. Cybersecurity teams can only do their job effectively if they have a complete view of security incidents collected from systems, devices, and applications across the organization. Using a security information and event management (SIEM) platform provides analysts with a centralized view of the data they need.
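The value of centralization is correlation: events that look benign on any one device can form a pattern across the fleet. Here is a toy sketch, with hypothetical normalized events, of the kind of cross-source rule a SIEM evaluates:

```python
from collections import Counter

# Hypothetical normalized events, as a SIEM might collect them from
# firewalls, VPN gateways, and application servers.
events = [
    {"source": "firewall-1", "type": "auth_failure", "user": "alice"},
    {"source": "vpn-gw", "type": "auth_failure", "user": "alice"},
    {"source": "app-server", "type": "auth_failure", "user": "alice"},
    {"source": "firewall-1", "type": "port_scan", "user": None},
]

def flag_repeated_failures(events, threshold=3):
    """Flag users whose failed logins across ALL sources reach a threshold --
    a correlation no single device's log would reveal on its own."""
    failures = Counter(e["user"] for e in events
                       if e["type"] == "auth_failure" and e["user"])
    return [user for user, count in failures.items() if count >= threshold]

print(flag_repeated_failures(events))  # ['alice']
```

Each source saw only one failure; only the centralized view reveals the pattern.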

Image courtesy of Cloudflare

Microsoft’s Project Natick: The Underwater Data Center of the Future

When you think of deep-sea adventures, what comes to mind? Colorful plants, odd-looking sea creatures, maybe even a shipwreck or two; but what about a data center? Going forward, underwater data centers may become the norm rather than an anomaly. Back in 2018, Microsoft sank an entire data center, 864 servers and 27.6 petabytes of storage, to the bottom of the Scottish sea. After two years sitting 117 feet deep in the ocean, Microsoft’s Project Natick, as it’s known, has been brought to the surface and deemed a success.

What is Project Natick?


Microsoft’s Project Natick was conceived back in 2015 around the idea that submerged servers could significantly lower energy usage. To test the original hypothesis, Microsoft immersed a data center off the coast of California for several months as a proof of concept to see if the computers would even endure the underwater journey. Ultimately, the experiment was designed to show that portable, flexible data center deployments in coastal areas around the world could scale up data center capacity while keeping energy and operating costs low. Doing this would allow companies to utilize smaller data centers closer to where customers need them, instead of routing everything to centralized hubs. Next, the company will look into increasing the size and performance of these data centers by connecting more than one together to merge their resources.

What We Learned from Microsoft’s Undersea Experiment

After two years of submersion, the results of the experiment not only showed that offshore underwater data centers perform well overall, but also that the servers inside proved to be up to eight times more reliable than their above-ground equivalents. The team of researchers plans to examine exactly what was responsible for this greater reliability. For now, steady temperatures, no oxygen corrosion, and a lack of humans bumping into the computers are thought to be the reasons. Hopefully, the same outcome can be transposed to land-based server farms for increased performance and efficiency across the board.

Additional findings included the ability to operate with greater power efficiency, especially in regions where the grid on land is not considered reliable enough for sustained operation. Microsoft will also take lessons on renewability from the project’s successful deployment, with Natick relying on wind, solar, and experimental tidal technologies. As for future underwater servers, Microsoft acknowledged that the project is still in its infancy; building a data center with the same capabilities as a standard Microsoft Azure region would require multiple vessels.


The Benefits of Submersible Data Centers


The benefit of using a natural cooling agent instead of energy to cool a data center is an obvious positive outcome of the experiment. When Microsoft hauled its underwater data center up from the bottom of the North Sea and conducted its analysis, researchers also found the servers were eight times more reliable than those on land.

The shipping container sized pod that was recently pulled from 117 feet below the North Sea off Scotland’s Orkney Islands was deployed in June 2018. Throughout the last two years, researchers observed the performance of 864 standard Microsoft data center servers installed on 12 racks inside the pod. During the experiment they also learned more about the economics of modular undersea data centers, which have the ability to be quickly set up offshore nearby population centers and need less resources for efficient operations and cooling. 

Natick researchers assume that the servers benefited from the pod’s nitrogen atmosphere, being less corrosive than oxygen. The non-existence of human interaction to disrupt components also likely added to increased reliability.

The North Sea-based project also exhibited the possibility of leveraging green technologies for data center operations. The data center was connected to the local electric grid, which is 100% supplied by wind, solar and experimental energy technologies. In the future, Microsoft plans to explore eliminating the grid connection altogether by co-locating a data center with an ocean-based green power system, such as offshore wind or tidal turbines.

Snowflake IPO

On September 16, 2020, history was made on the New York Stock Exchange. A software company named Snowflake (ticker: SNOW) completed the largest software IPO ever. One of the most hotly anticipated listings of 2020, Snowflake began publicly trading at $120 per share and jumped to $300 per share within a matter of minutes. With that never-before-seen surge in price, Snowflake also became the largest company ever to double in value on its first day of trading, ending with a value of almost $75 billion.

What is Snowflake?

So, what exactly does Snowflake do? What makes billionaire investors like Warren Buffett and Marc Benioff jump all over a newly traded software company? It must be something special, right? With all the speculation surrounding the IPO, it’s worth explaining what the company does. Simply put, Snowflake helps companies store their data in the cloud rather than in on-site facilities. Traditionally, a company’s data has been stored on-premises on physical servers managed by that company, and tech giants like Oracle and IBM have led that industry for decades. Snowflake is profoundly different. Instead of helping companies store their data on-premises, Snowflake facilitates the warehousing of data in the cloud. But that’s not all. Snowflake also makes the data queryable, simplifying the process for businesses looking to pull insights from their stored data. This is what sets Snowflake apart from the other data-hoarding behemoths of the IT world. Snowflake discovered the secret to separating data storage from the act of computing on the data, and it did so before the other big players like Google, Amazon, or Microsoft. Snowflake is here to stay.

Snowflake’s Leadership

Unlike Silicon Valley’s tech unicorns of the past, Snowflake was started in 2012 by three database engineers. Backed by venture capitalists, including one VC firm that wishes to remain anonymous, Snowflake is currently led by software veteran Frank Slootman. Before taking the reins at Snowflake, Slootman had great success leading Data Domain and ServiceNow. He grew Data Domain from a twenty-employee startup to over $1 billion in sales and a $2.4 billion acquisition by EMC. It’s safe to say that Snowflake is in the right hands, especially if it hopes to mature into its valuation.

Snowflake’s Product Offering

Snowflake isn’t the only managed data warehouse in the industry; Amazon Web Services’ (AWS) Redshift and Google Cloud Platform’s (GCP) BigQuery are common alternatives. So something had to set Snowflake apart from the competition: a combination of flexibility, service, and user interface. With a database like Snowflake, two pieces of infrastructure drive the revenue model: storage and computing. Snowflake takes responsibility for storing the data as well as ensuring that queries run quickly and smoothly. The idea of splitting storage and computing in a data warehouse was unusual when Snowflake launched in 2012; today, query engines like Presto exist solely to run queries, with no storage included. Snowflake offers the advantages of splitting storage and queries: stored data lives remotely in the cloud, saving local resources for the computing load. Moving storage to the cloud delivers lower cost, higher availability, and greater scalability.


Multiple Vendor Options

A majority of companies have adopted a multi-cloud approach because they prefer not to be tied down to a single cloud provider. There’s a natural hesitancy to choose options like BigQuery that are tied to a single cloud like Google. Snowflake offers a different kind of flexibility, operating on AWS, Azure, or GCP and satisfying the multi-cloud wishes of CIOs. With tech giants battling for domination of the cloud, Snowflake is, in a sense, the Switzerland of data warehousing.



Snowflake as a Service

When considering building a data warehouse, you need to take into account the management of the infrastructure itself. Even when farming out servers to a cloud provider, decisions such as sizing storage correctly, scaling for growth, and choosing networking hardware come into play. Snowflake is a fully managed service, which means users don’t need to build any infrastructure at all: just load your data into the system and query it. Simple as that.

While a fully managed service sounds great, it comes at a cost, so Snowflake users need to be deliberate about how they store and query their data. If you’re deciding whether to build or buy your data warehouse, it would be wise to compare the total cost of ownership for Snowflake against building something yourself.


Snowflake’s User Interface and SQL Functionality

Snowflake’s UI for querying and exploring tables is as easy on the eyes as it is to use. Its SQL functionality is also a strong selling point. SQL (Structured Query Language) is the programming language that developers and data scientists use to query their databases, and each database dialect differs slightly in details, wording, and structure. Snowflake’s SQL seems to have collected the best from all of the database languages and added other useful functions.
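SQL’s shape is broadly portable even where dialects differ. As a generic illustration (using Python’s built-in sqlite3, not Snowflake’s dialect), this is the kind of aggregate query analysts run against a warehouse:

```python
import sqlite3

# A throwaway in-memory database with an illustrative orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("east", 120.0), ("west", 80.0), ("east", 50.0)])

# A typical warehouse-style aggregate: total sales per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 170.0), ('west', 80.0)]
```

The syntax above runs nearly unchanged on most warehouses; the dialect differences Snowflake smooths over show up in functions, data types, and semi-structured data handling.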


A Battle Among Tech Giants

Competition gives reason for caution. Snowflake is rubbing shoulders with some of the world’s largest companies, including Amazon, Google, and Microsoft. While Snowflake has benefited from a first-mover advantage, the Big Three are catching up quickly with comparable platforms.

However, Snowflake depends on these competitors for data storage. It has managed to thrive by acting as “Switzerland,” so customers don’t have to commit to a single cloud provider. As more competition enters the multi-cloud service industry, nonalignment can be an advantage, but it may not always be possible. Snowflake’s market share is vulnerable, as there are no clear barriers to entry for the industry giants, given their technical talent and size.

Snowflake is just an infant in the public eye and we will see if it sinks or swims over the next year or so. But with brilliant leadership, a promising market, and an extraordinary track record, Snowflake may be much more than a one hit wonder. Snowflake may be a once in a lifetime business.

HPE vs Dell: The Battle of the Servers

When looking to purchase new servers for your organization, deciding which to choose can be a real dilemma. With so many brands offering so many features, the current server market may seem a bit saturated. Well, this article does the hard work for you. We’ve narrowed the list of server manufacturers down to two key players: Dell and Hewlett Packard Enterprise (HPE). We will help you with your next purchase decision by comparing the qualities and features of each, such as customer support, dependability, overall features, and cost. These are some of the major items to consider when investing in a new server. So, let’s begin.

Customer Support – Dell

The most beneficial thing about Dell customer support is that the company doesn’t require a paid support program to download updates or firmware. Dell ProSupport is considered one of the more consistently reliable support programs in the IT industry. That being said, rumors have been circulating that Dell will require a support contract for downloads in the future.


Customer Support – HPE

Unlike Dell, HPE currently requires businesses to have a support contract to download any new firmware or updates. Even with a contract in place, it can be tough to find support drivers and firmware through HPE’s platform, and HPE’s website is challenging to navigate for support information in general. On a brighter note, the support documentation provided is extremely thorough, and those with know-how can find manuals for essentially anything they need. By creating an online account through HPE’s website, you can gain access to HPE’s 24/7 support, manage future orders, and utilize the HPE Operational Support Services experience.

Customer Support Winner: Dell

Dependability – Dell

I’ll be the first to say that I’m not surprised whenever I hear about Dell servers running for years on end without any issues. Dell has been very consistent about steadily improving its servers. Dell is the Toyota of the server world.

Dependability – HPE

Despite the reliability claims made for HPE’s Superdome, Apollo, and newer ProLiant lines, HPE servers are known to have their faults. In fact, in a survey conducted in mid-2017, HPE ProLiant servers had about 2.5x as much downtime as Dell PowerEdge servers. However, HPE does a remarkable job with predictive alerts for parts that are likely to fail, giving businesses an opportunity to repair or replace parts before they experience downtime.

Dependability Winner: Dell

Out of Band Management Systems

In regard to Out of Band Management systems, HPE’s system is known as Integrated Lights-Out (iLO), and Dell’s system is known as Integrated Dell Remote Access Controller (iDRAC). In the past there were some major differences between the two, but currently the IPMI implementations don’t differ enough to be a big determining factor. Both systems now provide similar features, such as HTML5 support. However, here are a few differences they do have.

Out of Band Management Systems – Dell

Dell’s iDRAC has progressed quite a bit in recent years. Since iDRAC 7, Java is no longer needed, though the graphical user interface is not quite as polished as HPE’s. iDRAC uses a physical license, which can be purchased on the secondary market, so you can avoid being locked in with the OEM after end of life. Updates generally take a bit longer with iDRAC.

Out of Band Management Systems – HPE

HPE’s iLO Advanced console requires a license, but the standard console is included. Using the advanced console can ultimately lock you in with the OEM when your servers reach end of life, since the licenses can’t be purchased on the secondary market. It’s been noted that the advanced key can be reused on multiple servers so only one product key needs to be purchased, but doing so is against HPE’s terms of service. Generally, the GUI with iLO Advanced feels more natural and the platform seems quicker.

Out of Band Management Systems Winner: HPE

Cost of Initial Investment- Dell

Price flexibility is almost nonexistent when negotiating with Dell, although Dell has been known to ease into more of a deal for bigger, repeat customers. In the past Dell was seen as the more affordable option, but the initial investment costs are nearly identical now. Because Dell is typically less expensive, it tends to be the preference of enterprise professionals trying to keep costs low and margins healthy. Dell’s sheer scale also helps keep it cost effective.

Cost of Initial Investment- HPE

HPE is generally more open to price negotiation, even though opening quotes are similar to Dell’s. As with everything in business, your relationship with the vendor is a much greater factor in determining price; those who order in larger quantities, more frequently, usually have the upper hand in negotiations. That being said, HPE servers tend to be a little more expensive on average. When cost is not a factor, HPE tends to be the choice where long-term performance is the more important objective. HPE servers are supported globally through a number of channels, and due to the abundance of used HPE equipment on the market, replacement parts are fairly easy to come by. HPE also offers a more thorough documentation system, containing manuals for every little-known part HPE has ever made. HPE is enterprise class, whereas Dell is business class.

Cost of Initial Investment Winner: Tie

The Decisive Recap

When it really comes down to it, HPE and Dell are both very similar companies with comparable features. When assessing HPE vs Dell servers, there is no winner. There isn’t a major distinction between the companies as far as manufacturing quality, cost, or dependability. Those are factors that should be weighed on a case by case basis.

If you’re planning on replacing your existing hardware, sell your old equipment to us! We’d love to help you sell your used servers.

You can start by sending us a list of the equipment you want to sell. Not only do we buy used IT equipment, we also offer a range of related services.

14 questions to ask before upgrading your servers

Servers are almost always deployed with specific objectives in mind. Regardless of whether a server is installed in a small business or a large enterprise, its role can change over time, and it may start fulfilling other services and responsibilities. Therefore, it’s important to review a server’s resource load to help ensure the organization improves performance and avoids downtime.

What do you do when your servers are obsolete and ready to be retired? Unfortunately, server upgrades aren’t as easy as just dropping in more RAM, they require extensive planning. 

The server is essentially the backbone of a business’s IT functionality. Acquiring and installing a new server is a large undertaking for any business, and choosing the correct server is important to an organization’s future.

So, what should you consider when it’s time to upgrade? To make matters a little easier, we’ve put together a list of 14 things to consider when upgrading your servers to ensure your organization’s systems perform at the highest levels.

Does it fit your needs?

First, let’s make sure that the new server is able to meet your organization’s IT needs. Determine the necessary requirements, compile this information, and work from there.

Is integration possible?

Check if you are able to integrate sections of your existing server into the new server. This could potentially save money on new technology and provide a level of consistency in terms of staff knowledge on the existing technology. Upgrading doesn’t mean that you need to throw your old equipment in the trash.

What are the costs?

Once you understand the performance requirements, the next step is to gauge which servers meet this most closely. Technology can be very expensive, so you shouldn’t pay for any technology that won’t be of use to your organization’s output.

What maintenance is involved?

Even the most current technology needs to be maintained and any length of downtime could be disastrous for an organization. Ensure that some form of maintenance cover is put in place. Usually, there is a warranty included, but it comes with an expiration date. Make sure you ask about extended warranty options if they’re available.

What about future upgrades?

Considering the future is critical when it comes to working with new technology. The fast pace at which technology develops means that you may need to consider growing your server a lot sooner than you expected. 

Do you have a data backup?

Never make any changes or upgrades to a server, no matter how minor, without having a data backup. When a server is powered down, there is no guarantee that it will come back online. 

Should you create an image backup?

Manufacturers tend to offer disk cloning technologies that streamline recovering servers should a failure occur. Some provide a universal restore option that allows you to recover a failed server. When upgrades don’t go as expected, disk images can help recover not only data but a server’s complex configuration.

How many changes are you making to your servers?

Don’t make multiple changes all at once. Adding disks, replacing memory, or installing additional cards should each be performed separately. If only a single change is executed and things go wrong a day or two later, isolating the change responsible for the error is much easier than untangling a myriad of changes made at once.

Are you monitoring your logs?

After a server upgrade is completed, never presume all is well just because the server booted back up without displaying errors. Monitor log files, error reports, backup operations, and other critical events. Leverage Windows’ internal performance reports to ensure all is performing as intended whenever changes or upgrades are completed.

Did you confirm the OS you are running?

It’s easy to forget the operating system a server is running. By performing a quick audit of the system to be upgraded, you can confirm the OS is compatible and will be able to use the additional resources being installed.
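A quick audit can be scripted. This sketch uses Python’s standard library to gather basic facts about the running system; in practice you would check these against the upgrade’s requirements:

```python
import platform
import sys

def os_audit():
    """Collect basic facts about the running OS before planning an upgrade."""
    return {
        "system": platform.system(),      # e.g. 'Linux' or 'Windows'
        "release": platform.release(),    # kernel / OS release string
        "machine": platform.machine(),    # e.g. 'x86_64'
        # A 32-bit OS caps usable RAM, so a memory upgrade may be wasted on it.
        "is_64bit": sys.maxsize > 2 ** 32,
    }

report = os_audit()
print(report)
```

Running this on each server before ordering parts confirms the OS can actually address the additional resources being installed.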

Does the chassis support the upgrade?

Server hardware can be notoriously inconsistent. Manufacturers often change model numbers and product designs. Whenever installing additional resources, you should read the manufacturer’s technical specifications before purchasing the upgrades.

Did you double check that it will work?

When installing new server hardware, don’t assume the new components will simply plug and play with the server’s operating system. Since the upgrade is being performed on a server, confirm the component is listed on the OS vendor’s hardware compatibility list. It doesn’t hurt to check the server manufacturer’s forums either.

Does the software need an update?

Make sure to keep up with any upgrades that require software adjustments. For example, you may need to update a server’s virtual memory settings following a memory upgrade.

Did you get the most value for your money?

Sure, less expensive disks, RAM, power supplies, and other resources are readily available. But when it comes to servers only high-quality components should be installed. While these items may cost a bit more than others, the performance and uptime benefits more than compensate for any additional expense.
