Data Backup

Data Center Downtime Disaster? Don’t Panic! Here’s Your Recovery Plan

In the bustling heart of your data center, where racks hum and information flows like an electrical current, the very thought of downtime sends shivers down your spine. But fear not, intrepid data center managers! While unplanned interruptions are like rogue thunderstorms in the digital landscape, preparation is the lightning rod that guides you through the turbulence. This blog is your blueprint for weathering the storm, a comprehensive guide to preventing and recovering from data center downtime disasters.

Prevention: Building a Fort Against the Digital Deluge

Before diving into recovery plans, let’s fortify your data center against potential threats. Think of it as building a robust dam upstream, minimizing the risk of a downstream flood.

1. The Pillars of Preparedness:

  • Identify Threats: Conduct a thorough risk assessment, mapping out potential vulnerabilities like power outages, hardware failures, natural disasters, cyberattacks, and human error.
  • Redundancy is Your Mantra: Implement hardware and software redundancy at every critical level. Dual power grids, mirrored servers, and redundant network connections create a safety net for essential operations.
  • Backup and Replication: Regular backups, both on-site and off-site, are your digital Noah’s Ark. Consider cloud-based solutions for geographically dispersed backup copies, ensuring data survives even regional disasters (see the sketch after this list).
  • Disaster Recovery Testing: Don’t wait for the real storm to test your umbrella. Implement regular simulations of disaster scenarios, identifying and patching any leaks in your recovery plan.
  • Communication is Key: Establish clear communication channels for your internal team and external stakeholders. Ensure everyone knows their roles and responsibilities during a downtime event, minimizing confusion and facilitating a swift response.
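To make the backup-and-replication bullet concrete, here is a minimal sketch of a job that keeps one copy on separate local storage and pushes a second copy off-site to object storage. It assumes boto3 is installed and AWS credentials are configured; the paths and bucket name are placeholders, not a prescription for your environment.

```python
import shutil
from pathlib import Path

import boto3

def back_up(source: str, onsite_dir: str, bucket: str) -> None:
    src = Path(source)
    # On-site copy: a second copy on different local storage.
    onsite = Path(onsite_dir) / src.name
    onsite.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, onsite)
    # Off-site copy: geographically separate (here, S3 object storage).
    boto3.client("s3").upload_file(str(src), bucket, src.name)

back_up("db_dump.sql", "/mnt/backup", "example-offsite-backups")
```

In practice you would schedule a job like this (cron, systemd timers, or your backup suite’s scheduler) and monitor it, because an unmonitored backup job fails silently at the worst possible time.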

2. Preventive Maintenance: Plugging the Leaks Before They Spring

Routine maintenance is like patching the cracks in your digital dam. Proactive measures address potential issues before they become outages:

  • Hardware and Software Maintenance: Implement comprehensive maintenance schedules for equipment, ensuring uptime and minimizing the risk of sudden failures.
  • Security Upgrades and Patching: Stay vigilant against cyber threats. Regularly update software and security patches to shield your data center from the latest vulnerabilities.
  • Environmental Controls: Temperature and humidity fluctuations can wreak havoc on equipment. Monitor and maintain optimal environmental conditions within your data center.

The Storm Hits: Rebooting From the Digital Flood

Despite your best efforts, even the most meticulously prepared data center can face downtime. When the storm cloud bursts, here’s your roadmap to navigate the deluge:

1. Rapid Response:

  • Activate Incident Response Protocol: Trigger your pre-defined communication channels, alerting your team and stakeholders of the outage.
  • Assess the Situation: Diagnose the source of the downtime and prioritize critical systems for immediate restoration.
  • Contain the Damage: Minimize data loss by isolating affected systems and initiating failover procedures to redundant backups.

2. Recovery in Motion:

  • Restore Critical Systems: Focus on bringing back core operations first, ensuring essential services resume as quickly as possible.
  • Data Recovery: Begin data restoration from backups, following your pre-established procedures to minimize lost information.
  • Communication and Transparency: Keep your team and stakeholders informed throughout the recovery process. Provide regular updates on progress and estimated timeframes for full restoration.

3. After the Storm: Learning from the Downpour

Once the data center hums back to life, it’s time for introspection. Use the downtime as a learning opportunity:

  • Debrief and Analyze: Conduct a thorough post-mortem analysis, identifying the root cause of the outage and any vulnerabilities exposed.
  • Update Your Plan: Refine your disaster recovery plan based on the lessons learned. Enhance procedures, address gaps, and strengthen your defenses against future storms.
  • Share Knowledge: Disseminate the learnings from the incident within your team and across the organization. Foster a culture of continuous improvement to build resilience against future disruptions.

A Final Note: Embracing the Unexpected

Data center downtime can be a nightmare, but with the right preparation and a well-honed recovery plan, it doesn’t have to be an existential crisis. By embracing a proactive approach and fostering a culture of preparedness, you can transform those storm clouds into an opportunity to strengthen your data center’s resilience and emerge even stronger. Remember, data center managers, it’s not about preventing the storm, it’s about weathering it with grace and efficiency.

This blog has been your compass through the turbulence. Now, go forth and build your data center’s ark – a digital fortress ready to weather any storm!

Bonus Tip: Don’t forget to document your disaster recovery plan clearly and concisely. Make it easily accessible to everyone involved, ensuring a smooth and coordinated response when the unexpected hits.

3 Things Star Wars Taught Us About Data Storage

In a galaxy far, far away, a farm boy on a desert planet joined an uprising to save a princess from a dark lord. This epic tale, known as Star Wars, has captivated audiences for over four decades and has become a cornerstone of global pop culture. But what if I told you that the Star Wars saga also holds valuable lessons in the realm of data storage, backup, and security? Indeed, George Lucas, the mastermind behind the franchise, was a data backup and cloud storage enthusiast. As we explore the Star Wars universe, we’ll uncover insights on data storage, data backup, and data security that can help you safeguard your organization’s critical information.

The Importance of Data Security in a Galaxy Far, Far Away

A robust data backup strategy begins with a strong data security approach. Data security is the first line of defense against potential data loss and can significantly reduce reliance on backups. Unfortunately, data security was often neglected in the Star Wars trilogy, resulting in data breaches and critical information being lost.

In the movies, the Jedi Archives, a repository of vital knowledge, were compromised when Obi-Wan attempted to access information about the planet Kamino. He found only a blank space where the planet should have been; Yoda concluded that the entry had likely been deliberately erased from the archives. This serves as a lesson on the importance of maintaining strong passwords and permissions management.

In today’s data landscape, it’s essential to review your data security strategy regularly: eliminate vulnerabilities, rotate passwords, implement two-factor authentication, and always use encryption to safeguard your organization’s data from potential cyber threats.
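As one hedged illustration of the “always use encryption” point, the snippet below encrypts data before it is ever backed up, using the widely used Python `cryptography` package. The key handling shown is deliberately oversimplified; a real deployment would keep the key in a secrets manager or HSM.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safer than next to the backups.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"coordinates of the planet Kamino"  # stand-in for sensitive data
ciphertext = fernet.encrypt(plaintext)

# Only a holder of the key can recover the original bytes.
assert fernet.decrypt(ciphertext) == plaintext
```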

The Power of Data Backup

Even when your data security is impeccable, unexpected disasters can occur, as demonstrated in the Star Wars universe. Inadequate security management on both sides led to the destruction of planets and superweapons. This highlights the importance of having a data backup plan in place.

The ideal approach to data backup is the 3-2-1 backup strategy, which involves having the data itself, a backup copy on-site (like an external hard drive), and a final copy stored in the cloud. The Star Wars universe primarily used data-tapes for their backup needs, showcasing the robustness and longevity of this technology.

In Star Wars, the blueprints for the Death Star were stored on Scarif, serving as the Empire’s cloud storage of sorts. The Death Star, like your organization, could benefit from additional copies of data in different geographic regions to mitigate the risk of data loss due to natural disasters. Tape storage, like data-tapes in the Star Wars universe, is an excellent choice for long-term data preservation.

The Significance of Version Control

Effective data backup solutions require regularity. Data backups must be performed consistently, sometimes even daily, depending on the situation and the importance of the data. The Star Wars saga underscores the need for up-to-date backups. The Empire’s failure to manage version control resulted in inaccurate information about the Death Star’s superlaser.

Version history is another crucial aspect of a backup strategy, allowing users to maintain multiple versions of a file over extended periods, potentially forever. Had the Empire employed version history, they could have reverted to earlier, more accurate plans to thwart the Rebel Alliance.
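Here is a minimal sketch of what version history can look like at the file level: each backup run writes a new timestamped copy instead of overwriting the last, so earlier plans stay retrievable. The file names and directories are illustrative.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def versioned_backup(source: str, archive_dir: str) -> Path:
    """Copy `source` into `archive_dir` under a timestamped name."""
    src = Path(source)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = Path(archive_dir) / f"{src.stem}-{stamp}{src.suffix}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)
    return dest

def list_versions(source: str, archive_dir: str) -> list[Path]:
    """All retained versions, oldest first (the timestamp format sorts)."""
    src = Path(source)
    return sorted(Path(archive_dir).glob(f"{src.stem}-*{src.suffix}"))

versioned_backup("death_star_plans.dwg", "/mnt/archive")
print(list_versions("death_star_plans.dwg", "/mnt/archive"))
```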

May the Data Be with You

Whether you manage a small business or a vast enterprise, your data is a critical asset that can mean the difference between success and failure. Just as in the Star Wars universe, data security and backup shouldn’t be a battle. Create a comprehensive plan that suits your organization, ensure your data is securely stored, and regularly verify that it’s up to date with the most recent versions. In the grand scheme of your data management journey, remember the iconic phrase, “May the Data Be with You.”

3-2-1 Backup Rule

The Essential Guide to Data Security and Backup: Deciphering the 3-2-1 Rule

In an increasingly digital world, where data is at the heart of every operation, safeguarding your information is paramount. Data security and backup strategies are vital for individuals and businesses alike. But how do you ensure your data is not only secure but also protected against unforeseen disasters? Enter the 3-2-1 backup rule, a time-tested concept that every data enthusiast should understand. In this comprehensive guide, we’ll delve into the intricacies of this rule and how it can fortify your data management strategy.

What is the 3-2-1 Backup Rule?

The 3-2-1 backup rule, popularized by renowned photographer Peter Krogh, stems from a profound understanding of the inevitability of data storage failures. Krogh’s wisdom distilled down to this simple yet effective rule: There are two kinds of people – those who have already experienced a storage failure and those who will face one in the future. It’s not a matter of if, but when.

The rule aims to address two pivotal questions:

  1. How many backup files should I have?
  2. Where should I store them?

The 3-2-1 backup rule, in essence, prescribes a structured approach to safeguarding your digital assets, and it goes as follows:

1. Have at least three copies of your data.

2. Store the copies on two different types of media.

3. Keep one backup copy offsite.

Let’s explore each element of this rule in detail.

Creating at Least Three Data Copies

Yes, three copies – that’s what the 3-2-1 rule mandates. In addition to your primary data, you should maintain at least two additional backups. But why the insistence on multiple copies? Consider this scenario: Your original data resides on storage device A, and its backup is on storage device B. If both devices are identical and don’t share common failure causes, and if device A has a 1/100 probability of failure (the same goes for device B), the likelihood of both devices failing simultaneously is reduced to 1/10,000.

Now, picture this: with three copies of data, you have your primary data (device A) and two backup copies (device B and device C). Assuming that all devices exhibit the same characteristics and have no common failure causes, the probability of all three devices failing at the same time decreases to a mere 1/1,000,000 chance of data loss. This multi-copy strategy drastically reduces the risk compared to having only one backup with a 1/100 chance of losing everything. Furthermore, having more than two copies of data ensures protection against a catastrophic event that affects the primary and its backup stored in the same location.
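The arithmetic behind those numbers is simple to verify, assuming identical devices that fail independently with probability 1/100:

```python
p = 1 / 100             # assumed per-device failure probability

two_devices = p ** 2    # original + one backup
three_devices = p ** 3  # original + two backups

print(two_devices)      # 0.0001 -> 1 in 10,000
print(three_devices)    # 1e-06  -> 1 in 1,000,000
```

The caveat in the prose matters: the multiplication is only valid when failures are truly independent, which is exactly why the rest of the rule pushes you toward different media types and different locations.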

Storing Data on at Least Two Different Media Types

Here’s where the ‘2’ in the 3-2-1 rule plays a crucial role. It’s strongly recommended to maintain data copies on at least two different storage types. While devices within the same RAID setup may not be entirely independent, avoiding common failure causes is more feasible when data is stored on different media types.

For example, you could diversify your storage by having your data on internal hard disk drives and removable storage media, such as tapes, external hard drives, USB drives, or SD cards. Alternatively, you might opt for two internal hard disk drives located in separate storage locations. This diversification further fortifies your data against potential threats.

Storing at Least One Copy Offsite

Physical separation of data copies is critical. Keeping your backup storage device in the same vicinity as your primary storage device can be risky, as unforeseen events such as natural disasters, fires, or other emergencies could jeopardize both sets of data. It’s imperative to store at least one copy offsite, away from the primary location.

Many companies have learned this lesson the hard way, especially those situated in areas prone to natural disasters. A fire, flood, or tornado can quickly devastate on-site data. For smaller businesses with just one location, cloud storage emerges as a smart alternative, providing offsite security.

Additionally, companies of all sizes find tape storage at an offsite location to be a popular choice. Tapes offer a reliable, physical means of storing data securely.

In Conclusion:

The 3-2-1 backup rule is not merely a guideline; it’s a safeguard against data loss. As data becomes increasingly indispensable in our lives, understanding and implementing this rule is vital. Whether you’re an individual managing personal data or an IT professional responsible for a corporation’s information, the 3-2-1 rule can help you ensure the integrity, availability, and longevity of your digital assets.

Data security and backup are not optional but a necessity. By adhering to the 3-2-1 rule, you fortify your defenses, safeguard your data against unforeseen disasters, and ensure the continuity of your operations.

In our ever-evolving digital landscape, the 3-2-1 backup rule remains an unwavering beacon of data protection. Explore the options available to you, select the right storage media, and implement a strategy that aligns with this rule. Your data’s safety depends on it.

For more insights and information on expanding your data storage strategy, learn about purchasing tape media here.

Every system administrator should understand one thing – backup is king! Regardless of the system or platform you’re running, backup is the cornerstone of data security and resilience. Don’t wait until disaster strikes; fortify your data today, following the 3-2-1 backup rule. Your digital assets deserve nothing less.

Best Free and Paid Cloud Storage Providers in 2022

Cloud storage and its benefits.

Cloud storage is like a virtual data center that the company using it does not operate itself; instead, a cloud service provider delivers the data center’s facilities remotely.

In cloud storage, the user’s data is copied multiple times and stored across several data centers, so that if one server fails, the data can still be served from another data center. The user retains access to their data through power outages, hardware failures, and even major natural disasters.
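A minimal sketch of what that redundancy looks like from the client side: try each replica endpoint in turn until one responds. The URLs are placeholders; real providers handle this routing for you behind a single endpoint.

```python
import requests

REPLICAS = [
    "https://us-east.storage.example.com/myfile",
    "https://eu-west.storage.example.com/myfile",
    "https://ap-south.storage.example.com/myfile",
]

def fetch_with_failover(urls: list[str]) -> bytes:
    last_error = None
    for url in urls:
        try:
            resp = requests.get(url, timeout=5)
            resp.raise_for_status()
            return resp.content          # first healthy replica wins
        except requests.RequestException as exc:
            last_error = exc             # that data center is down; try the next
    raise RuntimeError(f"all replicas unavailable: {last_error}")
```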

To use cloud services, a company pays only for the storage capacity and type of service it needs, without setting aside any space on its own premises for storage hardware.

Cloud services are delivered to companies through a web-based interface, so the user is not required to own large systems.

Companies with access to cloud services do not need to worry about maintaining data center infrastructure on a regular basis or allocating budget for data center facilities.

Data theft has always existed in every industry; it can be mitigated, but never entirely prevented. Guarding against total data loss requires trained IT professionals capable of quick decisions. Cloud providers are well staffed with such professionals, since data protection is the primary service they sell to their customers. Individual companies, by contrast, often cannot keep their own IT infrastructure fully up to date.

Using cloud services will reduce capital expenses for the company.

If every company ran its own IT infrastructure for data, energy consumption would be high, whereas a single cloud service provider can serve many companies at once while keeping energy consumption low.

To gain more storage, a company simply contacts its cloud service provider and changes its subscription plan, getting access to more storage space for an increase in the subscription rate.

Difference between free cloud storage and paid cloud storage.

Many well-known paid cloud service providers also offer a free tier of cloud storage. To upload data, the user only needs an internet connection.

Free cloud storage lets the user keep a protected backup copy of their data that can be accessed from many devices.

This is beneficial for those with limited storage capacity on their devices who don’t want to invest in a storage medium. Free cloud storage lets the user move some of their media and files into the cloud, freeing up space on the device. The media and files remain accessible over any internet connection, so important items are protected against accidental deletion.

With a free cloud service, data can be accessed from anywhere without paying any charge; the user simply signs up for an account with the provider.

The disadvantage of free cloud services is the limited storage space on offer; to get more space, the user will need to pay.

One thing paid and free cloud services have in common: to get a better experience from either, the user needs to purchase an upgraded service from the cloud provider.

With paid cloud storage services, providers offer the user more storage space and stronger security, and the user can back up media and files from more than one device.

Following are some cloud storage providers:

Google Cloud Free Program –

The user will get the following options:

90-day, $300 Free Trial – new Google Cloud or Google Maps Platform users can use Google Cloud and Google Maps Platform services free for 90 days, along with $300 in free Cloud Billing credits.

All Google Cloud users can use free Google Cloud products like Compute Engine, Cloud Storage, and BigQuery within the monthly usage limits specified by Google.

For maps usage, Google Maps Platform provides a recurring $200 monthly credit, applied to each Maps-related Cloud Billing account the user creates.

Google One – the storage will be shared through Google Drive, Gmail, and Google Photos.

Google One gives every user 15 GB of storage for free. To get more storage space beyond that, the user pays for one of the following tiers:

BASIC: $1.99 per month or $19.99 annually for 100 GB. This also includes access to Google experts and the option to add family members, who share the plan’s benefits but must live in the same country as the account holder.

STANDARD: $2.99 per month or $29.99 annually for 200 GB.

This includes the same benefits as the BASIC package.

PREMIUM: $9.99 per month or $99.99 annually for 2 TB.

This includes the same benefits as the BASIC package and gives the user a VPN to use on their Android devices.

Amazon Web Services – AWS provides over 160 cloud services. Under the Free Tier, after signing up for an account, users can choose services based on their needs. Some Free Tier services are free for 12 months, some are always free, and others have a trial period after which the user must purchase a subscription to continue.

Microsoft Azure Free Account – when a user signs up for Azure with a free account, they get USD 200 in credit for the first 30 days. The account also includes two groups of services: popular services free for 12 months, and 25 other services that are always free.

Microsoft Azure also has a Pricing Calculator that lets potential buyers estimate pricing based on their existing workloads.

OneDrive: depending on their needs, the buyer can opt for a home package or a business package.

With Microsoft 365 Family, the buyer can take a one-month trial covering from one to six people. With Microsoft 365 Business, the number of users depends on the plan.

IBM Cloud – IBM also provides storage options that are either always free or available as free trials, with a credit amount allotted before the trial period starts; afterward, the user signs up for paid services.

iCloud storage: when a person signs up, they are automatically given 5 GB of free storage for media and files.

After using up the 5 GB of iCloud storage, the user can upgrade to iCloud+, which also lets them share their storage with family members.

Oracle: Oracle provides a free, time-limited trial for exploring Oracle Cloud Infrastructure products, along with a few lifetime-free services. The trial includes $300 worth of cloud credits valid for 30 days.

Dropbox: the free plan suits those with minimal storage requirements; free access provides 2 GB of space and, among other benefits, a file accidentally deleted from Dropbox can be restored within 30 days.

For more storage space with Dropbox, the user can upgrade to a paid plan.

Both are safe options for storing personal media and files, but a paid cloud membership better suits businesses that must protect more sensitive files than an individual would. That is why free cloud storage is advised for personal use.

How Is Blockchain Disrupting the Cloud Storage Industry?

What is blockchain and why are people using it?

It is a distributed database shared across the nodes of a computer network. Blockchain stores information electronically in a digital format and is best known for its role in cryptocurrency systems such as Bitcoin, where it creates a secure, decentralized record of transactions.

Blockchain claims to guarantee the fidelity and security of recorded data, and to establish trust, without involving a trusted third party.

In a blockchain, data is stored in sets known as blocks, each holding a set of information. Blocks have a fixed storage capacity and are closely linked to the previous block, forming the chain. When new information needs to be recorded, a new block is created, and once filled, the block is added to the chain.

Traditionally, databases record data in tables, whereas a blockchain organizes data into blocks. Each block carries a timestamp, so every block added to the chain extends an irreversible timeline of data that becomes fixed in place.
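A toy version of that structure makes the immutability claim tangible: each block stores its predecessor’s hash, so editing any earlier block breaks every link after it. This is a teaching sketch, not a real blockchain (no mining, no consensus).

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def new_block(data: str, prev: dict | None) -> dict:
    block = {
        "index": 0 if prev is None else prev["index"] + 1,
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev["hash"] if prev else "0" * 64,
    }
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False      # block contents were tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False      # link to the previous block is broken
    return True

genesis = new_block("genesis", None)
chain = [genesis, new_block("alice pays bob 5", genesis)]
print(chain_is_valid(chain))                    # True
chain[0]["data"] = "mallory pays mallory 500"   # attempt to rewrite history
print(chain_is_valid(chain))                    # False -- tampering is detectable
```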

Blockchain is preferred for a number of reasons.

Blockchain is used in transactional settings, with transactions approved by thousands of computers. This eliminates human involvement: blockchain does not require a human-run verification process. And even if a mistake occurs, because the blocks are separate, the error will not spread.

Just as it removes human verification, blockchain also removes the need for trusted third-party verification, and with it the associated cost. Payment processing companies charge a fee on every payment; blockchain helps eliminate those fees as well.

Information stored in a blockchain has no central location; it is spread across many computers. This reduces the chance of losing data: if one copy of the blockchain is breached, the attackers obtain only a single copy of the information, and the whole network is not compromised.

Blockchain enables quick deposits around the clock, every day. This is helpful when money needs to be transferred or deposited to a bank in a different time zone.

Blockchain networks are confidential rather than merely anonymous. When transactions are made on a blockchain, anyone with internet access can view the transaction history, but they cannot access any information about the user, nor can the user be identified.

Instead of identities, a unique public key is recorded on the blockchain alongside each transaction’s details.

After transactions are recorded, they must be verified by the blockchain network; only once verified is the information added to the chain.

Most blockchains are entirely open-source software, meaning anyone can access and review the code, which makes cryptocurrencies auditable. There is no hidden information about who controls Bitcoin or how it is modified. Anybody can suggest changes, and if the community accepts an idea, the software is updated.

Companies across several industries have started adopting blockchain.

What is cloud storage and why do people use it?

Cloud storage gives businesses and consumers a secure online place to store data. Keeping data online lets the user access it from any location and share it with anyone authorized to access it. Cloud storage also helps back up data so it can be recovered even from an off-site location.

Cloud services offer upgraded subscription packages that give the user access to larger storage sizes and additional cloud services.

Using cloud storage lets businesses eliminate the need to buy data storage infrastructure, freeing space on their premises. It also removes the burden of maintenance, since the cloud infrastructure is maintained by the service provider. Cloud servers let companies scale up their storage capacity whenever required, simply by changing the subscription plan.

The cloud lets users collaborate with colleagues, work remotely, and work outside business hours, because authorized users can access files anytime. Cloud servers can even be accessed over mobile data. Consolidating storage in the cloud also has a positive environmental effect, since shared infrastructure consumes less energy overall.

Therefore, by eliminating the need to staff an on-premises data center, the company can hire for higher-priority tasks.

Cloud computing provides various services such as 

  • Infrastructure as a Service,
  • Platform as a Service,
  • Software as a Service.

Difference between blockchain and cloud storage?

Whereas data in the cloud can be accessed anytime, blockchain uses various styles of encryption along with hashing to store data in protected databases.

Data in cloud storage is mutable, whereas data in a blockchain is immutable.

Cloud storage provides services in three formats, while blockchain eliminates the need for a trusted third party.

Cloud computing is centralized, meaning all data is stored in the provider’s centralized set of data centers, whereas blockchain is decentralized.

A cloud user can choose for their data to be public, private, or a combination of both, while blockchain’s main feature is data transparency.

Cloud computing follows the traditional database model: stored data resides on the machines of the parties involved. Blockchain, by contrast, claims to be incorruptible, maintaining a reliable online registry across many transactions; participants can alter data only with the approval of every party involved in the transaction.

Following are the companies which provide cloud computing services:

Google, IBM, Microsoft, Amazon Web Services, and Alibaba Cloud.

Following are the projects which use blockchain technology:

Ethereum, Bitcoin, Hyperledger Fabric, and Quorum.

How is blockchain disrupting the cloud storage industry?

Blockchain is advancing and winning preference mainly because it is more secure, thanks to the elimination of trusted third parties. Keeping data decentralized also adds to its security. And because data is secured block by block, cyberattackers cannot access the whole chain: the blocks are separate and require different unique keys. Blockchain is therefore less vulnerable to attackers, with less systemic damage and widespread data loss.

It is also next to impossible to alter the data, since transactions are governed by code rather than controlled by any third party.

Many companies have jumped to providing blockchain services alongside their cloud services, because blockchain services are less expensive to provide: many small organizations collaborate, contributing shared computing power and storage space.

Following are some companies that are using blockchain technology, as per 101Blockchains:

Unilever, Ford, FDA, DHL, AIA Group, MetLife, American International Group, etc.

Salesforce has launched Salesforce Blockchain which is built on CRM software. 

Storj provides blockchain-enabled cloud storage networks, which facilitate better security and lower the cost of transactions for storing information in the cloud.

Microsoft’s Project Natick: The Underwater Data Center of the Future

When you think of underwater, deep-sea adventures, what comes to mind? Colorful plants, odd-looking sea creatures, maybe even a shipwreck or two; but what about a data center? Going forward, underwater data centers may become the norm rather than an anomaly. Back in 2018, Microsoft sank an entire data center to the bottom of the Scottish sea, submerging 864 servers and 27.6 petabytes of storage. After two years sitting 117 feet deep in the ocean, Microsoft’s Project Natick, as it’s known, has been brought to the surface and deemed a success.

What is Project Natick?


Microsoft’s Project Natick was conceived back in 2015, on the idea that submerged servers could significantly lower energy usage. To test the original hypothesis, Microsoft immersed a data center off the coast of California for several months as a proof of concept, to see whether the computers would even endure the underwater journey. Ultimately, the experiment was meant to show that portable, flexible data center deployments in coastal areas around the world could scale up data center capacity while keeping energy and operating costs low. Doing this would let companies run smaller data centers closer to where customers need them, instead of routing everything to centralized hubs. Next, the company will look into increasing the size and performance of these data centers by connecting several together to merge their resources.

What We Learned from Microsoft’s Undersea Experiment

After two years submerged, the results showed not only that offshore underwater data centers perform well overall, but also that the servers inside proved up to eight times more reliable than their above-ground equivalents. The research team plans to examine exactly what was responsible for this greater reliability. For now, steady temperatures, the absence of oxygen corrosion, and a lack of humans bumping into the computers are thought to be the reasons. Hopefully, the same outcome can be transposed to land-based server farms for increased performance and efficiency across the board.

Additional findings included the ability to operate with greater power efficiency, especially in regions where the onshore grid is not considered reliable enough for sustained operation. Microsoft will also take lessons on renewability from the project’s successful deployment, with Natick relying on wind, solar, and experimental tidal technologies. As for future underwater servers, Microsoft acknowledged that the project is still in its infancy; building a data center with the same capabilities as a standard Microsoft Azure data center would require multiple vessels.

Do your data centers need servicing?

The Benefits of Submersible Data Centers


The benefits of using a natural cooling agent instead of energy to cool a data center are an obvious positive outcome of the experiment. When Microsoft hauled its underwater data center up from the bottom of the North Sea and conducted its analysis, researchers also found the servers were eight times more reliable than those on land.

The shipping-container-sized pod recently pulled from 117 feet below the North Sea off Scotland’s Orkney Islands was deployed in June 2018. Over those two years, researchers observed the performance of 864 standard Microsoft data center servers installed on 12 racks inside the pod. During the experiment they also learned more about the economics of modular undersea data centers, which can be set up quickly offshore near population centers and need fewer resources for efficient operation and cooling.

Natick researchers believe the servers benefited from the pod’s nitrogen atmosphere, which is less corrosive than oxygen. The absence of human interaction with the components also likely contributed to the increased reliability.

The North Sea-based project also exhibited the possibility of leveraging green technologies for data center operations. The data center was connected to the local electric grid, which is 100% supplied by wind, solar and experimental energy technologies. In the future, Microsoft plans to explore eliminating the grid connection altogether by co-locating a data center with an ocean-based green power system, such as offshore wind or tidal turbines.

Cyber Insurance in the Modern World

Yes, you read that correctly: cyber insurance is a real thing, and it does exactly what it says. No, cyber insurance can’t defend your business from a cyber-attack, but it can keep your business afloat with secure financial support should a data security incident happen. Most organizations operate their business and reach potential customers via social media and internet-based transactions. Unfortunately, those modes of communication also serve as openings for cyber warfare. The odds are not in your favor: cyberattacks are likely to occur and can cause serious losses for organizations both large and small. As part of a risk management plan, organizations regularly must decide which risks to avoid, accept, control, or transfer. Transferring risk is where cyber insurance pays massive dividends.


What is Cyber Insurance?

By definition, a cyber insurance policy, also known as cyber risk insurance (CRI) or cyber liability insurance coverage (CLIC), is meant to help an organization alleviate the risk of a cyber-related security breach by offsetting the costs involved in recovery. Cyber insurance started making waves in 2005, with the total value of premiums projected to reach $7.5 billion by 2020. According to audit and assurance consultants PwC, about 33% of U.S. companies currently hold a cyber insurance policy. Clearly companies are feeling the need for cyber insurance, but what exactly does it cover? Depending on the policy, cyber insurance covers expenses related to the policyholder as well as claims made by injured third parties.

Below are some common reimbursable expenses:

  • Forensic Investigation: A forensics investigation is needed to establish what occurred, the best way to repair damage caused and how to prevent a similar security breach from happening again. This may include coordination with law enforcement and the FBI.
  • Any Business Losses Incurred: A typical policy may contain items similar to those covered by an errors & omissions policy, as well as financial losses caused by network downtime, business disruption, data loss recovery, and reputation repair.
  • Privacy and Notification Services: This involves mandatory data breach notifications to customers and involved parties, and credit monitoring for customers whose information was or may have been violated.
  • Lawsuits and Extortion Coverage: This includes legal expenses related to the release of confidential information and intellectual property, legal settlements, and regulatory fines. This may also include the costs associated with a ransomware extortion.

Like anything in the IT world, cyber insurance is continuously changing and growing. Cyber risks change often, and organizations have a tendency to avoid reporting the true effect of security breaches in order to prevent negative publicity. Because of this, policy underwriters have limited data on which to define the financial impact of attacks.

How do cyber insurance underwriters determine your coverage?


As any insurance company does, cyber insurance underwriters want to see that an organization has taken it upon itself to assess its weaknesses to cyberattacks. This cyber risk profile should also show that the company follows best practices by deploying defenses and controls to protect against potential attacks. Employee education in the form of security awareness training, especially for phishing and social engineering, should also be part of the organization’s security protection plan.

Cyber-attacks against all enterprises have been increasing over the years. Small businesses tend to take on the mindset that they’re too small to be worth the effort of an attack. Quite the contrary though, as Symantec found that over 30% of phishing attacks in 2015 were launched against businesses with under 250 employees. Symantec’s 2016 Internet Security Threat Report indicated that 43% of all attacks in 2015 were targeted at small businesses.

You can download Symantec’s 2016 Internet Security Threat Report here

The Centre for Strategic and International Studies estimates that the annual cost to the global economy from cybercrime was between $375 billion and $575 billion, with the average data breach costing larger companies over $3 million per incident. Every organization is different and must therefore decide whether it is willing to risk that amount of money, or whether cyber insurance is necessary to cover the costs it could potentially sustain.
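One way to frame that decision is a back-of-the-envelope expected-loss comparison. The numbers below are illustrative assumptions (only the $3 million average breach cost comes from the figure cited above):

```python
breach_probability_per_year = 0.05  # assumed: 5% annual chance of a major breach
average_breach_cost = 3_000_000     # per-incident figure cited for larger companies
annual_premium = 60_000             # assumed quote from an insurer

expected_annual_loss = breach_probability_per_year * average_breach_cost
print(expected_annual_loss)                   # 150000.0

# If the expected loss comfortably exceeds the premium, transferring the
# risk through cyber insurance is the economically defensible choice.
print(annual_premium < expected_annual_loss)  # True
```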

As stated earlier in the article, cyber insurance covers first-party losses and third-party claims, whereas general liability insurance only covers property damage. Sony is a great example of when cyber insurance comes in handy. Sony was caught in the 2011 PlayStation hacker breach, with costs reaching $171M. Those costs could have been offset by cyber insurance had the company made certain it was covered beforehand.

The cost of cyber insurance coverage and premiums is based on an organization’s industry, the type of service it provides, its probability of data risks and exposures, its policies, and its annual gross revenue. Every business is different, so it is best to consult your policy provider when seeking more information about cyber insurance.

HPE vs Dell: The Battle of the Servers

When looking to purchase new servers for your organization, deciding which to choose can be a real dilemma. With so many brands offering so many different features, the current server market may seem a bit saturated. Well, this article does the hard work for you. We’ve narrowed the list of server manufacturers down to two key players: Dell and Hewlett Packard Enterprise (HPE). We will help you with your next purchase decision by comparing the qualities and features of each, such as customer support, dependability, overall features, and cost. These are some of the major items to consider when investing in a new server. So, let’s begin.

Customer Support – Dell

The most beneficial thing about Dell customer support is that the company doesn’t require a paid support program to download updates or firmware. Dell ProSupport is considered one of the more consistently reliable support programs in the IT industry. That said, rumors have been circulating that Dell will soon require a support contract for downloads.

You can find out more about Dell ProSupport here.

Customer Support – HPE

Unlike Dell, HPE currently requires businesses to have a support contract to download any new firmware or updates. It can be tough to find support drivers and firmware through HPE’s platform even with a contract in place, and HPE’s website is a bit challenging to use for finding support information in general. On a brighter note, the support documentation provided is extremely thorough, and those with know-how can find manuals for essentially anything they need. And by creating an online account on HPE’s website, one can gain access to HPE’s 24/7 support, manage future orders, and utilize the HPE Operational Support Services experience.

Customer Support Winner: Dell

Dependability – Dell

I’ll be the first to say that I’m not surprised whenever I hear about Dell servers running for years on end without any issues. Dell has always been very consistent about improving its servers. Dell is the Toyota of the server world.

Dependability – HPE

Despite the reliability claims made for HPE’s Superdome, Apollo, and newer ProLiant lines of servers, HPE servers are known to have their faults. In fact, in a survey done in mid-2017, HP ProLiant servers had about 2.5x as much downtime as Dell PowerEdge servers. However, HPE does a remarkable job with predictive alerts for parts deemed likely to fail, giving businesses an opportunity to repair or replace parts before they experience downtime.

Dependability Winner: Dell

Out of Band Management Systems

In regard to out-of-band management systems, HPE’s system is known as Integrated Lights-Out (iLO), and Dell’s is known as the Integrated Dell Remote Access Controller (iDRAC). In the past there were some major differences between the two, but today the IPMI implementations don’t differ enough to be a big determining factor. Both systems now provide similar features, such as HTML5 support. However, here are a few differences they do have.
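One practical consequence of that convergence: both iLO and iDRAC implement the DMTF Redfish REST API, so a basic health check can be written once and pointed at either BMC. A minimal sketch; the host, credentials, and self-signed-certificate handling are placeholders to adapt for your environment.

```python
import requests

def bmc_health(host: str, user: str, password: str) -> None:
    """Print model and health status for each system the BMC reports."""
    base = f"https://{host}/redfish/v1"
    auth = (user, password)
    # verify=False only because BMCs commonly ship with self-signed certs.
    systems = requests.get(f"{base}/Systems", auth=auth,
                           verify=False, timeout=10).json()
    for member in systems.get("Members", []):
        system = requests.get(f"https://{host}{member['@odata.id']}",
                              auth=auth, verify=False, timeout=10).json()
        print(system.get("Model"), system.get("Status", {}).get("Health"))

bmc_health("10.0.0.42", "admin", "changeme")
```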

Out of Band Management Systems – Dell

Dell’s iDRAC has progressed quite a bit in recent years. As of iDRAC 7, Java is no longer needed, though the graphical user interface is not quite as polished as iLO’s. iDRAC uses a physical license, which can be purchased on the secondary market, letting you avoid being locked in with the OEM after end of life. Updates are generally a bit slower with iDRAC.

Out of Band Management Systems – HPE

HPE’s iLO Advanced console requires a license, but the standard console is included. Using the advanced console can ultimately lock you in with the OEM if your servers reach end of life, since the licenses can’t be purchased on the secondary market. It’s been noted that you only have to purchase one product key because the advanced key can be reused on multiple servers, but doing so is against HPE’s terms of service. Generally, the GUI with iLO Advanced feels more natural, and the platform seems quicker.

Out of Band Management Systems Winner: HPE

Cost of Initial Investment- Dell

Price flexibility is almost nonexistent when negotiating with Dell, although with bigger, repeat customers Dell has been known to ease into more of a deal. In the past Dell was seen as the more affordable option, but initial investment costs are nearly identical now. With Dell typically being less expensive, it tends to be the preference of enterprise professionals trying to keep costs low to increase revenue. Simply put, Dell is cheaper because it is so widely used, and everyone uses it because it’s more cost-effective.

Cost of Initial Investment- HPE

HPE is generally more open to price negotiation, even though opening quotes are similar to Dell’s. As with everything in business, your relationship with the vendor is a much greater factor in determining price; those who order in large quantities, more frequently, usually have the upper hand in negotiations. That said, HPE servers tend to be a little more expensive on average. When cost is not a factor, HPE tends to be the choice where long-term performance is the more important objective. HPE servers are supported globally through a number of channels, and due to the abundance of used HPE equipment on the market, replacement parts are fairly easy to come by. HPE also offers a more thorough documentation system, containing manuals for every little-known part HPE has ever made. HPE is enterprise class, whereas Dell is business class.

Cost of Initial Investment Winner: Tie

The Decisive Recap

When it really comes down to it, HPE and Dell are very similar companies with comparable features. When assessing HPE vs Dell servers, there is no outright winner. There isn’t a major distinction between the companies in manufacturing quality, cost, or dependability; those factors should be weighed on a case-by-case basis.

If you’re planning on replacing your existing hardware, sell your old equipment to us! We’d love to help you sell your used servers.

You can start by sending us a list of the equipment you want to sell. Not only do we buy used IT equipment, we also offer a range of related services.

The Role of Cryptocurrencies in the Age of Ransomware

Now more than ever, there is an obvious connection between the rising ransomware era and the cryptocurrency boom. Believe it or not, cryptocurrency and ransomware have an extensive history with one another. They are so closely linked that many have connected the rise of cryptocurrency with a corresponding rise in ransomware attacks across the globe. There is no debating that ransomware attacks are escalating at an alarming rate, but there is no solid evidence of a direct causal link to cryptocurrency. And even though the majority of ransoms are paid in crypto, the transparency of the currency’s blockchain makes it a terrible place to keep stolen money.

The link between cryptocurrency and ransomware attacks

There are two key ways that ransomware attacks rely on the cryptocurrency market. First, the ransoms paid during these attacks are usually demanded in cryptocurrency. A perfect example is the largest ransomware attack in history, the WannaCry attacks, in which attackers demanded that victims pay roughly $300 worth of Bitcoin (BTC) to release their captive data.

A second way that cryptocurrencies and ransomware attacks are linked is through what is called “ransomware as a service”. Plenty of cyber criminals offer “ransomware as a service,” essentially letting anyone hire a hacker via online marketplaces. How do you think they want payment for their services? Cryptocurrency.

Read more about the WannaCry ransomware attacks here

Show Me the Money

From an outsider’s perspective, it seems clear why hackers would demand ransom payments in cryptocurrency: the blockchain is based on privacy and encryption, seemingly the best place to hide stolen money. Well, think again. There is actually a different reason ransomware attacks make use of cryptocurrencies. The efficiency of cryptocurrency blockchain networks, rather than their concealment, is what really draws cybercriminals in.

The value of cryptocurrency during a cyber-attack lies in the transparency of crypto exchanges. A ransomware attacker can watch the public blockchain to see whether victims have paid their ransom, and can automate the procedures needed to give a victim the stolen data back.

On the other hand, the cryptocurrency market is possibly the worst place to keep the stolen funds. The transparency of the blockchain means the world can closely monitor the movement of ransom money, making it tricky to convert the stolen funds into another currency without being tracked by law enforcement.

Read about the recent CSU college system ransomware attack here

Law and Order

Now, just because a paid ransom can be tracked on the blockchain doesn’t automatically mean the hackers who committed the crime can be caught. Due to the anonymity of cryptocurrency, it is nearly impossible for law enforcement agencies to find the true identity of cybercriminals. However, there are always exceptions to the rule.

Blockchain allows any transaction involving a given bitcoin address to be traced all the way back to its original transaction. This gives law enforcement access to the financial records needed to trace a ransom payment, in a way that would never be possible with cash transactions.
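As a hedged illustration, here is roughly what that tracing looks like with a public block-explorer API (Blockstream’s Esplora endpoints are used here as an assumption; any explorer with an address API would do, and the address itself is a placeholder):

```python
import requests

def address_activity(addr: str) -> None:
    """Summarize the public, on-chain activity of a bitcoin address."""
    base = "https://blockstream.info/api"
    stats = requests.get(f"{base}/address/{addr}", timeout=10).json()["chain_stats"]
    received_btc = stats["funded_txo_sum"] / 1e8   # satoshis -> BTC
    print(f"{addr}: {stats['tx_count']} transactions, "
          f"{received_btc:.8f} BTC received")
    # Every transaction is public and can be followed back to its inputs.
    for tx in requests.get(f"{base}/address/{addr}/txs", timeout=10).json()[:5]:
        print("tx:", tx["txid"])

address_activity("bc1q-placeholder-address")
```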

Following several recent and prominent ransomware attacks, authorities have called for the cryptocurrency market to be watched more closely. Any such supervision will need to be executed very carefully, so as not to detract from the anonymity that makes the currency attractive.

Protect Yourself Anyway You Can

The shortage of legislative control over the cryptocurrency market, combined with the rapid rise in ransomware attacks, means individuals need to take it upon themselves to protect their data. Some organizations have taken extraordinary approaches, such as hoarding Bitcoin in case they need to pay a ransom in a future attack.

For the common man, protecting against ransomware attacks means covering your bases. Double-check that all of your cybersecurity software is up to date, subscribe to a secure cloud storage provider, and back up your data regularly. Companies of all sizes should implement the 3-2-1 data backup strategy in case of a ransomware attack: keep at least three copies of your data, stored on at least two different types of media, with at least one copy offsite. It also helps to keep a separate, air-gapped copy of your data, preventing it from ever being stolen.

Learn More About Getting Your 3-2-1 Backup Plan in Place

TapeChat with Pat

At DTC, we value great relationships. Luckily for us, we have some of the best industry contacts out there when it comes to tape media storage and backup. Patrick Mayock, a Partner Development Manager at Hewlett Packard Enterprise (HPE), is one of those individuals. Pat has been with HPE for the last 7 years and, before that, spent 30 years in the data backup and storage industry. Pat is our go-to guy at HPE, a true source of support, and an overall great colleague, so for our TapeChat series he was our top choice. Pat’s resume is an extensive one that would impress anyone who sees it: he started his data and media storage journey back in the early ’90s in the Bay Area, and today he can be found in the greater Denver area with the great minds at HPE. Pat knows his stuff, so sit back and enjoy this little Q&A we set up for you. We hope you enjoy it, and without further ado, we welcome you to our series, TapeChat (with Pat)!

Pat, thank you for taking the time to join us digitally for this online Q&A. We would like to start off by stating how thrilled we are to have you with us. You’re an industry veteran and we’re honored to have you involved in our online content.

Thanks for the invite.  I enjoy working with your crew and am always impressed by your innovative strategies to reach out to new prospects and educate existing customers on the growing role of LTO tape from SMB to the Data Center. 

Let’s jump right into it! For the sake of starting things out on a fun note, what is the craziest story or experience you have had or know of involving the LTO / Tape industry? Maybe a fun fact that most are unaware of, or something you would typically tell friends and family… Anything that stands out…

I’ve worked with a few tape library companies over the years, and before that I sold the original 9-track ½-inch tape drives. Those were monsters, but you would laugh at how little data they stored on a reel of tape. One of the most memorable projects I worked on was in the Bay Area, at Oracle headquarters. They had the idea to migrate away from reel-to-reel tape drives and replace them with compact, rack-mounted, ‘robotic’ tape libraries. In the end, they replaced those library-type shelves storing hundreds of reels of tape with 32 tape libraries in their computer cabinets. Each tape library had room for 40 tape slots and four 5¼-inch full-height tape drives. The contrast was impressive. To restore data, they went from IT staffers physically moving tape media in ‘sneaker mode’ to having software locate where the data was stored, grab and load the tape automatically in the tape library, and start reading data. Ok, maybe too much of a tape story, but as a young sales rep at the time it was one I’ll never forget.

With someone like yourself who has been doing this for such a long time, what industry advancements and releases still get you excited to this day? What is Pat looking forward to right now in the LTO Tape world?

I’m lucky. We used to have five or more tape technologies all fighting for their place in the data protection equation, each from a different vendor. Now, Ultrium LTO tape has the majority of the market and is supported by a coalition of technology vendors working together to advance the design: some work on the physical tape media, some on the read/write heads, and some on the tape drive itself. The business has become more predictable and more reliable. About every two years the consortium releases the next level of LTO tape technology, and we will see LTO-9 public announcements begin by the end of 2020. The thirst for higher storage capacity and higher performance in the same physical space is what keeps me more than optimistic about the future.

When our sales team makes calls and asks a business if they are still backing up to LTO tape, the question is often met with a dismissive, “that’s outdated” reaction; in some cases we even get laughter, along the lines of “people still use tape?” Why do you think LTO as a backup option gets this type of response? What is it specifically about the technology that makes businesses feel LTO tape is a thing of the past…

As a Tape Guy, I hear that question a lot. The reality in the market is that some industries are generating so much data that they have to increase their dependence on tape-based solutions as part of their storage hierarchy. It starts with the simple cost comparison of data on a single disk drive versus the same amount of data on an LTO tape cartridge: LTO tape wins. But the real impact is much bigger than that. Think about the really large data center facilities. The bigger consideration is, for a given (large) amount of data, which solution can fit the most data into a cabinet-sized footprint. Physical floor space in the data center is at a premium, so tape wins. Then consider the cost of keeping that data accessible: a rack of disk drives consumes far more energy than a tape library, so tape wins again. Then consider the cooling costs that go along with all those spinning platters: tape wins, creating a greener solution that is more cost-effective. At HPE, and available from DTC, we have white papers and presentations on exactly this topic of cost savings. In summary, if a company is not looking at or using LTO tape, then its data retention, data protection, and data archiving needs just haven’t reached the breaking point yet.

There seems to be an emergence of the Disk / Hard Drive backup option being utilized by so many businesses. Do you feel like LTO Tape will ever be looked at with the same level of respect or appreciation by those same businesses?

If you are talking about solid-state disk for high access and dedicated disk drive solutions for backup, sure, that works. But at some point you need multiple copies at multiple locations to protect your investment. The downside of most disk-only solutions is that all the data is accessible across the network. Nowadays, ransomware and cybersecurity are among the biggest threats to corporations, government agencies, and even mom-and-pop SMBs. The unique advantage of adding LTO-based tape libraries is that the data is NOT easily tapped into, because the physical media is not in the tape drive. Again, HPE has very detailed white papers and presentations on this air gap principle, all available from DTC.

LTO Tape vs Hard Drive seems to be the big two in terms of the data / backup realm, as an insider to this topic, where do you see this battle going in the far future?

It’s less of a battle and more of a plan to divide the workload and work together. In most environments, tape and disk work side by side, with applications selecting where the data is kept. However, there are physical limitations on how much space is available on a spinning platter or set of platters, and this will dramatically slow the growth of disk capacity within a given form factor. With LTO tape technology, the physical areal footprint is much bigger because of the thousands of feet of tape within each cartridge; at LTO-8 we have 960 meters of tape to write on, and even at half an inch wide, that’s a lot of space for data. Both disk and tape will keep improving how much data they can fit on their media (areal density), but LTO tape simply has the advantage of so much more space to work with. LTO tape will continue to follow its future roadmap, which is already spec’d out to LTO-12.

With so many years in this industry, what has been the highlight of your career?

The technology has always impressed me: learning and talking about the details of a particular technical design advantage, then being able to work with a wide range of IT specialists and learning about their business and what they actually do with the data. But when I look back on the biggest highlights, I remember all the great people I have worked with, side by side, to solve customers’ storage and data protection problems. Sometimes we won, sometimes we didn’t. I will never forget working to do our best for the deal.

What tech advancements do you hope to see rolled out that would be a game changer for data storage as a whole?

The data storage evolution is driven by the creation of more data, every day. When one technology fails to keep pace with the growth, another steps up to the challenge. Like I said, LTO tape has a pretty solid path forward for easily six more years of breakthrough advancements. In six years, I’m sure there will be some new technology working to knock out LTO, something that today is just an idea.

We see more and more companies getting hit every day with ransomware / data theft due to hackers, what are your thoughts on this and where do you see things going with this. Will we ever reach a point where this will start to level off or become less common?

Ransomware and cybersecurity are the hot topics keeping IT directors and business owners up at night. It is criminal activity that is highly lucrative. Criminals will continue to attempt to steal data, block access, and hold companies for ransom wherever they can, but they prefer easy targets. As I mentioned earlier, tape solutions offer one key advantage in this battle: if the data isn’t live on the network, the hacker has to work harder. This is a critical step in protecting your data.

For more information on Pat, data backup and storage, and more, follow Pat on Twitter:
