Data Backup

BEST FREE AND PAID CLOUD STORAGE PROVIDERS IN 2022

Cloud storage and its benefits.

Cloud storage is like a virtual data center that is not operated by the company using it; instead, a cloud service provider delivers the data center's facilities remotely.

In cloud storage, the user's data is copied multiple times and stored in several data centers so that, if one server fails, the data can be served from another. This way, the user can still reach their data through a power outage, a hardware failure, or even a major natural disaster.
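The redundancy described above can be sketched as a toy replication scheme: the provider writes each object to several data centers, and reads fall back to a surviving replica. The names and structure below are invented for illustration, not any real provider's API:

```python
# Toy sketch of replicated cloud storage: every object is written to
# each data center, and reads fall back to the first replica online.
class DataCenter:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.objects = {}

def put(replicas, key, data):
    # Copy the object to every data center.
    for dc in replicas:
        dc.objects[key] = data

def get(replicas, key):
    # Read from the first data center that is still reachable.
    for dc in replicas:
        if dc.online and key in dc.objects:
            return dc.objects[key]
    raise IOError("all replicas unavailable")

centers = [DataCenter("us-east"), DataCenter("eu-west"), DataCenter("ap-south")]
put(centers, "report.pdf", b"...contents...")
centers[0].online = False          # simulate a regional outage
print(get(centers, "report.pdf"))  # still served, from a surviving replica
```

Real providers add consistency protocols and background repair on top of this idea, but the user-visible effect is the same: one failed site does not make the data unreachable.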

To use cloud services, the user pays only for the amount of storage and the type of service they need, without setting aside any space on the company's premises for storage hardware.

Cloud services are delivered to companies through a web-based interface, so the user is not required to own large systems of their own.

Companies that use cloud services do not need to maintain data center infrastructure on a regular basis or allocate part of their budget to running data center facilities.

Data theft has always existed in every industry; it can be reduced but never prevented entirely. To avoid losing data outright, companies need trained IT professionals who can make decisions quickly. Cloud providers are well staffed with such professionals, since protecting customer data is their primary service, and individual companies rarely manage to keep their own IT infrastructure equally up to date.

Using cloud services will reduce capital expenses for the company.

If every company ran its own IT infrastructure, energy consumption would be high, whereas a single cloud provider can serve many companies at once from shared data centers while keeping energy consumption low.

To use more storage, a company simply contacts its cloud service provider and changes its subscription plan; additional storage costs nothing more than an increase in the subscription rate.

Difference between free cloud storage and paid cloud storage.

With free cloud storage, some well-known paid providers offer a basic tier of their service at no charge. To upload data, the user only needs an internet connection.

Free cloud storage gives the user a protected backup of their data that can be accessed from many devices.

This is beneficial for those with little storage capacity on their devices who don't want to invest in a storage medium. Free cloud storage lets the user move some of their media and files to the cloud, freeing up space on the device. The media and files remain accessible over any internet connection, so important items are protected against accidental deletion.

With a free cloud service, data can be accessed from anywhere without paying any charge; the user just signs up for an account with the provider.

The disadvantage of a free cloud service is the limited storage space; to get more, the user has to pay.

What paid and free cloud services have in common is that to get better features from either, the user must purchase a more advanced service from the provider.

Paid cloud storage services give the user more storage space and stronger security, and allow media and files to be backed up from more than one device.

Following are some cloud storage providers:

Google Cloud Free Program –

The user will get the following options:

90-day, $300 Free Trial – new Google Cloud or Google Maps Platform users can use Google Cloud and Google Maps Platform services free for 90 days, along with $300 in free Cloud Billing credits.

All Google Cloud users can use certain products, such as Compute Engine, Cloud Storage, and BigQuery, free of charge within monthly usage limits specified by Google.

For Google Maps usage, Google Maps Platform provides a recurring $200 monthly credit applied to each Maps-related Cloud Billing account the user creates.

Google One – storage is shared across Google Drive, Gmail, and Google Photos.

Google One starts every user with 15 GB of free storage. Beyond that, the user pays for one of the following plans:

BASIC: $1.99 per month or $19.99 annually for 100 GB. This includes access to Google experts and the option to share the plan with family members, although family members must live in the same country as the user.

STANDARD: $2.99 per month or $29.99 annually for 200 GB.

This includes the same benefits as the BASIC package.

PREMIUM: $9.99 per month or $99.99 annually for 2 TB.

This includes the same benefits as the BASIC package and gives the user a VPN to use on their Android devices.
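Using the list prices quoted above, a quick calculation shows what the annual billing option saves over paying month to month at each tier (prices as quoted in this article; a sketch, not an official price list):

```python
# Google One tier prices as quoted above (USD). Annual billing saves
# roughly 16-17% over twelve monthly payments at every tier.
tiers = {
    "BASIC":    {"gb": 100,  "monthly": 1.99, "annual": 19.99},
    "STANDARD": {"gb": 200,  "monthly": 2.99, "annual": 29.99},
    "PREMIUM":  {"gb": 2048, "monthly": 9.99, "annual": 99.99},  # 2 TB
}

for name, t in tiers.items():
    yearly_if_monthly = 12 * t["monthly"]
    savings = yearly_if_monthly - t["annual"]
    per_gb = t["annual"] / t["gb"]
    print(f"{name}: save ${savings:.2f}/yr with annual billing "
          f"(${per_gb:.4f} per GB per year)")
```

The per-GB figure also shows why the larger tiers are the better value: PREMIUM works out to a small fraction of BASIC's cost per gigabyte.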

Amazon Web Services – AWS provides 160 cloud services. After signing up for a Free Tier account, the user can pick services based on their needs. Some Free Tier services are free for 12 months, some are always free, and some have a trial period after which the user must purchase a subscription to keep using them.

Microsoft Azure Free Account – when a user signs up for Azure with a free account, they get USD 200 in credit for the first 30 days. The account also includes two groups of services: popular services that are free for 12 months and another 25 services that are always free.

Microsoft Azure also has a Pricing Calculator that lets a potential buyer estimate costs based on their existing workloads.
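The idea behind such a calculator can be sketched as a function that multiplies projected usage by unit rates. The rates below are invented for illustration only; they are not Azure's actual prices:

```python
# Toy cloud cost estimator in the spirit of a pricing calculator.
# All rates are made-up illustration values, not real Azure pricing.
RATES = {
    "vm_hour": 0.10,           # $ per VM-hour of compute
    "storage_gb_month": 0.02,  # $ per GB-month of storage
    "egress_gb": 0.08,         # $ per GB of outbound traffic
}

def monthly_estimate(vm_hours, storage_gb, egress_gb):
    # Sum each usage dimension multiplied by its unit rate.
    return (vm_hours * RATES["vm_hour"]
            + storage_gb * RATES["storage_gb_month"]
            + egress_gb * RATES["egress_gb"])

# Example workload: two VMs running all month (~730 h each),
# 500 GB stored, 100 GB of outbound traffic.
print(f"${monthly_estimate(2 * 730, 500, 100):.2f}")
```

Real calculators add region, instance type, and discount tiers to this same basic multiply-and-sum structure.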

OneDrive: depending on their needs, the buyer can opt for a home or business package.

With Microsoft 365 Family, the buyer can take a one-month trial covering one to six people. With Microsoft 365 Business, the number of users depends on the plan.

IBM Cloud – IBM also provides storage with always-free options as well as free trials; a credit amount is allotted before the trial period starts, after which the user must sign up for paid services.

iCloud storage: when a person signs up, they are automatically given 5 GB of free storage for media and files.

After the 5 GB of iCloud storage is used up, the user can upgrade to iCloud+, which also allows the storage to be shared with family.

Oracle: Oracle provides a time-limited free trial for exploring Oracle Cloud Infrastructure products, along with a few always-free services. The trial includes $300 worth of cloud credits valid for 30 days.

Dropbox: the free plan suits those with minimal storage requirements, since free access comes with 2 GB of space. Among other benefits, if a user accidentally deletes a file, it can be restored from Dropbox within 30 days.

To get more storage space, the user can upgrade to one of Dropbox's paid plans.

Both are safe options for storing personal data, media, and files, but a paid cloud subscription suits businesses that must protect more sensitive files than an individual would, which is why free cloud storage is advised for personal use.

HOW IS BLOCKCHAIN DISRUPTING THE CLOUD STORAGE INDUSTRY?

What is blockchain and why are people using it?

A blockchain is a distributed database shared across the nodes of a computer network. It stores information electronically in a digital format and is best known for its role in cryptocurrency systems such as Bitcoin, where it creates a secure, decentralized record of transactions.

Blockchain claims to guarantee the fidelity and security of recorded data, and to establish trust, without involving a trusted third party.

In a blockchain, data is stored in groups known as blocks. Each block has a fixed storage capacity and is linked to the block before it, forming a chain. When new information needs to be recorded, a new block is created, and once the information has been recorded, the block is added to the chain.

Traditional databases record data in tables, whereas a blockchain organizes data into blocks. Each block is timestamped when it is added to the chain, so the structure forms an irreversible timeline: once a block is in place, its position in that timeline is fixed.
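The block-and-link structure described above can be sketched in a few lines: each block stores its data, a timestamp, and the hash of the previous block, so changing any earlier block breaks every hash after it. This is a minimal illustration, not a real blockchain implementation:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (data, timestamp, previous hash).
    payload = {k: block[k] for k in ("data", "time", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    # A block records its data, a timestamp, and the previous block's hash.
    block = {"data": data, "time": time.time(), "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    # Recompute every hash and check each link back to the previous block.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("tx: A pays B", chain[-1]["hash"]))
chain.append(make_block("tx: B pays C", chain[-1]["hash"]))

assert chain_is_valid(chain)
chain[1]["data"] = "tx: A pays Mallory"  # tamper with history
assert not chain_is_valid(chain)          # the broken hash exposes the change
```

Real blockchains add consensus (proof of work or proof of stake) on top of this hash linking, but the tamper-evidence shown here is the core of the "irreversible timeline" property.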

Blockchain is preferred due to various reasons. 

Blockchain is used in transactional settings where records are approved by thousands of computers, which removes the need for human verification. Even if a mistake occurs, it stays confined to a single block and does not spread through the chain.

Just as it removes human verification, blockchain also removes the need for trusted third-party verification, and with it the associated cost. Payment processors charge a fee on every transaction; blockchain helps eliminate those fees as well.

Information stored in a blockchain has no central location; it is spread across many computers. This reduces the risk of data loss: if one copy of the blockchain is breached, the attackers obtain only that single copy, and the network as a whole is not compromised.

Blockchain enables quick transfers around the clock, every day, which is helpful when money needs to be moved or deposited across different time zones.

Blockchain networks are confidential rather than merely anonymous. When transactions are made on a blockchain, anyone with internet access can view the transaction history, but they cannot access any information about the user or identify who made a transaction.

To record a transaction on the blockchain, a unique key, or public key, is stored alongside the transaction details in place of the user's identity.

After transactions are submitted, they must be verified by the blockchain network; only once verified is the information added to the chain.

Most blockchain software is entirely open source, meaning anyone can access and review the code, which allows public scrutiny of cryptocurrencies. There is no hidden information about who controls Bitcoin or how it is edited, and anybody can propose changes; if the community accepts an idea, the software is updated.

Several types of industries have started adopting blockchain in their companies. 

What is cloud storage and why do people use it?

Cloud storage helps businesses and consumers keep their data in a secure online place. Storing data online lets the user access it from any location and share it with anyone authorized to see it. Cloud storage also backs data up, so it can be recovered even from an off-site location.

Cloud services also offer upgraded subscription packages that give the user larger storage sizes and additional services.

Cloud storage removes the need for a business to buy its own storage infrastructure, freeing space on the premises, and removes the need to maintain that infrastructure, since the cloud service provider maintains it. Companies can also increase their storage capacity whenever required simply by changing their subscription plan.

The cloud lets users collaborate with colleagues and work remotely, even outside business hours, because authorized users can access files at any time, including over mobile data. Consolidating storage in the cloud can also benefit the environment, since shared data centers consume less energy overall than many separate on-premises systems.

By eliminating the need to staff an on-premises data center, the company can assign employees to higher-priority tasks.

Cloud computing provides various services such as 

  • Infrastructure as a Service,
  • Platform as a Service,
  • Software as a Service.

Differences between blockchain and cloud storage

Where cloud data can be accessed at any time, blockchain stores data in protected databases using various styles of encryption together with hashes.

In cloud storage, data is mutable, whereas in blockchain technology it is not.

Cloud computing delivers its services in three formats, while blockchain's distinguishing feature is eliminating the need for a trusted third party.

Cloud computing is centralized, meaning all data is stored in the company's centralized set of data centers, whereas blockchain is decentralized.

In the cloud, a user can choose for their data to be public, private, or a combination of both, while blockchain's main feature is transparency of data.

Cloud computing follows the traditional database model: stored data resides on machines controlled by the participants. Blockchain, by contrast, claims to be incorruptible, keeping a reliable online registry of transactions; participants can alter data only with the approval of every party involved in the transaction.

Following are the companies which provide cloud computing services:

Google, IBM, Microsoft, Amazon Web Services, and Alibaba Cloud.

Following are the projects which use blockchain technology:

Ethereum, Bitcoin, Hyperledger Fabric, and Quorum.

How is blockchain disrupting the cloud storage industry?

The main reason blockchain is gaining ground is security: it eliminates trusted third parties and keeps data decentralized. Because data is sealed into separate blocks, each protected by its own keys, attackers cannot access the whole chain at once. Blockchain is therefore less vulnerable to attack, with less systemic damage and less widespread data loss.

It is also next to impossible to alter the data, since transactions are governed by code rather than controlled by a third party.

Many companies have moved to offering blockchain services alongside their cloud services, partly because blockchain services cost less to provide: many small organizations pool their computing power and storage space.

Following are some companies that are using blockchain technology, as per 101Blockchains:

Unilever, Ford, FDA, DHL, AIA Group, MetLife, American International Group, etc.

Salesforce has launched Salesforce Blockchain, built on its CRM software.

Storj provides a blockchain-enabled cloud storage network that improves security and lowers the transaction costs of storing information in the cloud.

Microsoft’s Project Natick: The Underwater Data Center of the Future

When you think of underwater, deep-sea adventures, what comes to mind? Colorful plants, odd-looking sea creatures, maybe even a shipwreck or two; but what about a data center? Going forward, underwater data centers may become the norm rather than an anomaly. Back in 2018, Microsoft sank an entire data center, 864 servers and 27.6 petabytes of storage, to the bottom of the sea off Scotland. After two years sitting 117 feet deep in the ocean, Microsoft's Project Natick, as it's known, has been brought back to the surface and deemed a success.

What is Project Natick?

 

Microsoft's Project Natick was conceived back in 2015, around the idea that submerged servers could significantly lower energy usage. To test the original hypothesis, Microsoft immersed a data center off the coast of California for several months as a proof of concept, to see whether the computers would even survive the underwater journey. Ultimately, the experiment was meant to show that portable, flexible data center deployments in coastal areas around the world could scale up data center capacity while keeping energy and operating costs low. This would let companies run smaller data centers closer to where customers need them, instead of routing everything to centralized hubs. Next, the company will look into increasing the size and performance of these data centers by connecting several together to pool their resources.

What We Learned from Microsoft’s Undersea Experiment

After two years submerged, the experiment showed not only that offshore underwater data centers perform well overall, but also that the servers inside proved up to eight times more reliable than their above-ground equivalents. The research team plans to examine exactly what was responsible for this greater reliability; for now, steady temperatures, the absence of oxygen corrosion, and the lack of humans bumping into the machines are thought to be the reasons. Hopefully the same outcome can be transposed to land-based server farms for better performance and efficiency across the board.

The project also demonstrated more power-efficient operation, especially in regions where the onshore grid is not reliable enough for sustained operation, and it offers lessons on renewability: Natick ran on wind, solar, and experimental tidal technologies. As for future underwater servers, Microsoft acknowledges the project is still in its infancy; a data center with the capabilities of a standard Microsoft Azure region would require multiple vessels.


The Benefits of Submersible Data Centers

 

Using natural cooling instead of energy to cool a data center is an obvious positive outcome of the experiment. When Microsoft hauled its underwater data center up from the bottom of the North Sea and analyzed it, researchers also found the servers were eight times more reliable than those on land.

The shipping-container-sized pod recently pulled from 117 feet below the North Sea off Scotland's Orkney Islands was deployed in June 2018. Over those two years, researchers monitored the performance of 864 standard Microsoft data center servers installed on 12 racks inside the pod. They also learned more about the economics of modular undersea data centers, which can be set up quickly offshore near population centers and need fewer resources for cooling and efficient operation.

Natick researchers believe the servers benefited from the pod's nitrogen atmosphere, which is less corrosive than oxygen. The absence of humans to disrupt components likely also contributed to the increased reliability.

The North Sea project also demonstrated the possibility of using green technologies for data center operations. The data center was connected to the local electric grid, which is supplied entirely by wind, solar, and experimental energy technologies. In the future, Microsoft plans to explore eliminating the grid connection altogether by co-locating a data center with an ocean-based green power system, such as offshore wind or tidal turbines.

Cyber Insurance in the Modern World

Yes, you read that correctly: cyber insurance is a real thing, and it does exactly what it says. Cyber insurance can't defend your business from a cyber-attack, but it can keep your business afloat with financial support should a data security incident happen. Most organizations run their business and reach potential customers via social media and internet-based transactions. Unfortunately, those same channels are openings for cyber warfare. The odds are not in your favor: cyberattacks are likely to occur, and they can cause serious losses for organizations large and small. As part of a risk management plan, organizations must regularly decide which risks to avoid, accept, control, or transfer. Transferring risk is where cyber insurance pays massive dividends.

 

What is Cyber Insurance?

By definition, a cyber insurance policy, also known as cyber risk insurance (CRI) or cyber liability insurance coverage (CLIC), helps an organization mitigate the risk of a cyber-related security breach by offsetting the costs of recovery. Cyber insurance started making waves in 2005, with the total value of premiums projected to reach $7.5 billion by 2020. According to audit and assurance consultants PwC, about 33% of U.S. companies currently hold a cyber insurance policy. Companies clearly feel the need for cyber insurance, but what exactly does it cover? Depending on the policy, cyber insurance covers expenses incurred by the policyholder as well as claims made by affected third parties.

Below are some common reimbursable expenses:

  • Forensic Investigation: A forensics investigation is needed to establish what occurred, the best way to repair damage caused and how to prevent a similar security breach from happening again. This may include coordination with law enforcement and the FBI.
  • Any Business Losses Incurred: A typical policy may contain similar items that are covered by an errors & omissions policy, as well as financial losses experienced by network downtime, business disruption, data loss recovery, and reputation repair.
  • Privacy and Notification Services: This involves mandatory data breach notifications to customers and involved parties, and credit monitoring for customers whose information was or may have been violated.
  • Lawsuits and Extortion Coverage: This includes legal expenses related to the release of confidential information and intellectual property, legal settlements, and regulatory fines. This may also include the costs associated from a ransomware extortion.

Like everything in the IT world, cyber insurance is continuously changing and growing. Cyber risks shift often, and organizations tend to avoid reporting the true impact of security breaches to limit negative publicity, so policy underwriters have limited data with which to gauge the financial impact of attacks.

How do cyber insurance underwriters determine your coverage?

 

Like any insurer, cyber insurance underwriters want to see that an organization has assessed its own weaknesses to cyberattacks. This cyber risk profile should also show that the company follows best practices, with defenses and controls in place against potential attacks. Employee education in the form of security awareness training, especially for phishing and social engineering, should also be part of the organization's security plan.

Cyber-attacks against enterprises of all sizes have been increasing for years. Small businesses tend to assume they're too small to be worth attacking; quite the contrary: Symantec found that over 30% of phishing attacks in 2015 were launched against businesses with under 250 employees, and its 2016 Internet Security Threat Report indicated that 43% of all attacks in 2015 targeted small businesses.

You can download Symantec's 2016 Internet Security Threat Report here.

The Center for Strategic and International Studies estimates the annual cost of cybercrime to the global economy at between $375 billion and $575 billion, with the average data breach costing larger companies over $3 million per incident. Every organization is different and must decide whether it is willing to risk that kind of money or whether cyber insurance is needed to cover what it could potentially sustain.

As stated earlier in the article, cyber insurance covers first-party losses and third-party claims, whereas general liability insurance covers only property damage. Sony is a great example of when cyber insurance would have come in handy: the company was caught in the 2011 PlayStation hacker breach, with costs reaching $171 million, costs that could have been offset had it made certain it was covered beforehand.

The cost of cyber insurance coverage and premiums is based on an organization's industry, the type of services it provides, its data risks and exposures, its policies, and its annual gross revenue. Every business is different, so it is best to consult your policy provider when seeking more information about cyber insurance.

HPE vs Dell: The Battle of the Servers

When purchasing new servers for your organization, deciding which to choose can be a real dilemma. With so many brands offering so many features, the server market may seem a bit saturated. This article does the hard work for you: we've narrowed the field to two key players, Dell and Hewlett Packard Enterprise (HPE), and will help with your next purchase decision by comparing each on customer support, dependability, overall features, and cost. These are some of the major considerations when investing in a new server. So, let's begin.

Customer Support – Dell

The most beneficial thing about Dell customer support is that the company doesn't require a paid support program to download updates or firmware. Dell ProSupport is regarded in the IT world as one of the more consistently reliable support programs in the industry. That said, rumors have circulated that Dell will require a support contract for downloads in the future.

You can find out more about Dell ProSupport here.

Customer Support – HPE

Unlike Dell, HPE currently requires businesses to have a support contract to download new firmware or updates, and it can be tough to find drivers and firmware through HPE's platform even with a contract in place. HPE's website is challenging to navigate for support information in general. On a brighter note, the support documentation is extremely thorough, and those with know-how can find manuals for essentially anything they need. By creating an online account on HPE's website, one can gain access to HPE's 24/7 support, manage future orders, and use the HPE Operational Support Services experience.

Customer Support Winner: Dell

Dependability – Dell

I'll be the first to say that I'm not surprised whenever I hear about Dell servers running for years on end without any issues. Dell has always been very consistent about improving its servers. Dell is the Toyota of the server world.

Dependability – HPE

Despite the reliability claims made for HPE's Superdome, Apollo, and newer ProLiant lines, HPE servers are known to have their faults. In fact, in a survey from mid-2017, HPE ProLiants had about 2.5 times as much downtime as Dell PowerEdge servers. However, HPE does a remarkable job with predictive alerts for parts deemed likely to fail, giving businesses an opportunity to repair or replace parts before they cause downtime.

Dependability Winner: Dell

Out of Band Management Systems

In regard to Out of Band Management systems, HPE’s system is known as Integrated Lights-Out (iLO), and Dell’s system is known as Integrated Dell Remote Access Controller (iDRAC). In the past there were some major differences between the two, but currently the IPMI implementations don’t differ enough to be a big determining factor. Both systems now provide similar features, such as HTML5 support. However, here are a few differences they do have.

Out of Band Management Systems – Dell

Dell's iDRAC has progressed quite a bit in recent years. Since iDRAC 7, Java is no longer needed, though the graphical user interface is not quite as polished as HPE's. iDRAC uses a physical license, which can be purchased on the secondary market, avoiding OEM lock-in after end of life. Updates generally take a bit longer with iDRAC.

Out of Band Management Systems – HPE

HPE's iLO Advanced console requires a license, but the standard console is included. Using the Advanced console can ultimately lock you in with the OEM once your servers reach end of life, since these licenses can't be purchased on the secondary market. It has been noted that a single product key can be reused on multiple servers, but doing so is against HPE's terms of service. Generally, the GUI with iLO Advanced feels more natural, and the platform seems quicker.

Out of Band Management Systems Winner: HPE

Cost of Initial Investment- Dell

Price flexibility is almost nonexistent when negotiating with Dell, although with bigger, repeat customers Dell has been known to ease into more of a deal. Dell was once seen as the more affordable option, but initial investment costs are nearly identical now. Because Dell has typically been less expensive, it tends to be the preference of enterprise professionals trying to keep costs low and revenue up: Dell is cheaper because it is so widely used, and everyone uses it because it's more cost-effective.

Cost of Initial Investment- HPE

HPE is generally more open to price negotiation, even though opening quotes are similar to Dell's. As with everything in business, your relationship with the vendor is a much greater factor in price; those who order in large quantities, more frequently, usually have the upper hand in negotiations. That said, HPE servers tend to be a little more expensive on average. When cost is not a factor, HPE tends to be the choice where long-term performance is the more important objective. HPE servers are supported globally through a number of channels, and thanks to the abundance of used HPE equipment on the market, replacement parts are fairly easy to come by. HPE also offers a more thorough documentation system, with manuals for every little-known part HPE has ever made. HPE is enterprise class, whereas Dell is business class.

Cost of Initial Investment Winner: Tie

The Decisive Recap

When it really comes down to it, HPE and Dell are very similar companies with comparable features. When assessing HPE vs Dell servers, there is no outright winner: there isn't a major distinction in manufacturing quality, cost, or dependability. Those factors should be weighed on a case-by-case basis.

If you're planning on replacing your existing hardware, sell your old equipment to us! We'd love to help you sell your used servers.

You can start by sending us a list of the equipment you want to sell. Not only do we buy used IT equipment, we also offer the following services:

The Role of Cryptocurrencies in the Age of Ransomware

Now more than ever, there is an obvious connection between the rise of ransomware and the cryptocurrency boom. Believe it or not, cryptocurrency and ransomware have an extensive history with one another; they are so closely linked that many have tied the rise of cryptocurrency to a corresponding rise in ransomware attacks across the globe. There is no debating that ransomware attacks are escalating at an alarming rate, but there is no solid evidence of a direct causal link to cryptocurrency. Even though the majority of ransoms are paid in crypto, the transparency of the currency's blockchain makes it a terrible place to keep stolen money.

The link between cryptocurrency and ransomware attacks

There are two key ways that ransomware attacks rely on the cryptocurrency market. First, the majority of the ransoms paid during these attacks are demanded in cryptocurrency. A perfect example is the largest ransomware attack in history, the WannaCry attacks, in which attackers demanded that victims pay nearly $300 in Bitcoin (BTC) to release their captive data.

A second way that cryptocurrencies and ransomware attacks are linked is through what is called “ransomware as a service”. Plenty of cyber criminals offer “ransomware as a service,” essentially letting anyone hire a hacker via online marketplaces. How do you think they want payment for their services? Cryptocurrency.

Read more about the WannaCry ransomware attacks here

Show Me the Money

From an outsider’s perspective, it seems clear why hackers would require ransom payments in cryptocurrency: blockchains are built around privacy and encryption, seemingly offering the best place to hide stolen money. Well, think again. There is actually a different reason why ransomware attacks make use of cryptocurrencies. The efficiency of cryptocurrency blockchain networks, rather than their concealment, is what really draws the cybercriminals in.

The value of cryptocurrency during a cyber attack lies in the transparency of crypto exchanges. A ransomware attacker can watch the public blockchain to see whether victims have paid their ransom, and can automate the procedures needed to give victims their stolen data back.

On the other hand, the cryptocurrency market is possibly the worst place to keep the stolen funds. The transparent quality of the cryptocurrency blockchain means that the world can closely monitor the transactions of ransom money. This makes it tricky to convert the stolen funds into another currency, since every movement can be tracked by law enforcement.

Read about the recent CSU college system ransomware attack here

Law and Order

Now just because the paid ransom for stolen data can be tracked on the blockchain doesn’t automatically mean that the hackers who committed the crime can be caught too. Due to the anonymity of cryptocurrency, it is nearly impossible for law enforcement agencies to find the true identity of cybercriminals. However, there are always exceptions to the rule.

The blockchain allows every transaction tied to a given Bitcoin address to be traced, all the way back to its first transaction. This gives law enforcement access to the financial records required to trace the ransom payment, in a way that would never be possible with cash transactions.

Due to several recent and prominent ransomware attacks, authorities have called for the cryptocurrency market to be watched more closely. Any such supervision will need to be executed very carefully, so as not to undermine the anonymity that makes the currency attractive.

Protect Yourself Any Way You Can

The shortage of legislative control of the cryptocurrency market, combined with the quick rise in ransomware attacks, means that individuals need to take it upon themselves to protect their data. Some organizations have taken extraordinary approaches, such as hoarding Bitcoin in case they need to pay a ransom in a future attack.

For the common man, protecting against ransomware attacks means covering your bases. You should double-check that all of your cybersecurity software is up to date, subscribe to a secure cloud storage provider, and back up your data regularly. Companies of all sizes should implement the 3-2-1 data backup strategy in case of a ransomware attack. The 3-2-1 backup plan states that one should have at least three different copies of data, stored on at least two different types of media, with at least one copy offsite. It also helps to have a separate copy of your data stored via the air-gap method, preventing it from ever being stolen.

Learn More About Getting Your 3-2-1 Backup Plan in Place

TapeChat with Pat

At DTC, we value great relationships. Luckily for us, we have some of the best industry contacts out there when it comes to tape media storage & backup. Patrick Mayock, a Partner Development Manager at Hewlett Packard Enterprise (HPE), is one of those individuals. Pat has been with HPE for the last 7 years and before that spent 30 years in the data backup / storage industry. Pat is our go-to guy at HPE, a true source of support, and an overall great colleague. For our TapeChat series, Pat was our top choice. Pat’s resume is an extensive one that would impress anyone who sees it. Pat started his data / media storage journey back in the early ’90s in the Bay Area. Fast forward to today, and Pat can be found in the greater Denver area with the great minds over at HPE. Pat knows his stuff, so sit back and enjoy this little Q&A we set up for you. We hope you enjoy it, and without further ado, we welcome you to our series, TapeChat (with Pat)!

Pat, thank you for taking the time to join us digitally for this online Q&A. We would like to start off by stating how thrilled we are to have you with us. You’re an industry veteran and we’re honored to have you involved in our online content.

Thanks for the invite.  I enjoy working with your crew and am always impressed by your innovative strategies to reach out to new prospects and educate existing customers on the growing role of LTO tape from SMB to the Data Center. 

Let’s jump right into it! For the sake of starting things out on a fun note, what is the craziest story or experience you have had or know of involving the LTO / Tape industry? Maybe a fun fact that most are unaware of, or something you would typically tell friends and family… Anything that stands out…

I’ve worked with a few tape library companies over the years and before that I sold the original 9 track ½ inch tape drives.  Those were monsters, but you would laugh how little data they stored on a reel of tape. One of the most memorable projects I worked on was in the Bay Area, at Oracle headquarters.  They had the idea to migrate from reel to reel tape drives with a plan to replace them with compact, rack mounted, ‘robotic’ tape libraries.  At the end, they replaced those library type shelves, storing hundreds of reels of tape with 32 tape libraries in their computer cabinets.  Each tape library had room for 40 tape slots and four 5 ¼ full high tape drives.  The contrast was impressive.  To restore data, they went from IT staffers physically moving tape media, in ‘sneaker mode’ to having software locate where the data was stored, grab and load the tape automatically in the tape library and start reading data.   Ok, maybe too much of a tape story, but as a young sales rep at the time it was one that I’ll never forget. 

With someone like yourself who has been doing this for such a long time, what industry advancements and releases still get you excited to this day? What is Pat looking forward to right now in the LTO Tape world?

I’m lucky.  We used to have five or more tape technologies all fighting for their place in the data protection equation, each from a different vendor. Now, Ultrium LTO tape has a majority of the market and is supported by a coalition of multiple technology vendors working together to advance the design. Some work in the physical tape media, some on the read/write heads, and some on the tape drive itself.  The business has become more predictable and more reliable.  About every two years the consortium releases the next level of LTO tape technology.  We will see LTO-9 technology begin public announcements by the end of 2020. And the thirst for higher storage capacity and higher performance in the same physical space, this is what keeps me more than optimistic about the future.

When our sales team is making calls and asks a business if they are still backing up to LTO Tape, that question is always met with such an unappreciated / outdated response, in some cases we receive a response of laughter with something along the lines of “people still use tape” as a response. Why do you think LTO as a backup option is getting this type of response? What is it specifically about the technology that makes businesses feel as if LTO Tape is a way of the past…

As a Tape Guy, I hear that question a lot.  The reality in the market is that some industries are generating so much data that they have to increase their dependence on tape based solutions as part of their storage hierarchy. It starts with just the cost comparison of data on a single disk drive versus that same amount of data on a LTO tape cartridge. LTO tape wins. But the real impact is so much bigger than just that.  Think about the really large data center facilities.  The bigger considerations are, for instance, for a given amount of data (a lot), what solution can fit the most data into a cabinet size solution.  Physical floor space in the data center is at a premium.  Tape wins. Then consider the cost of having that data accessible.  A rack of disk drives consumes tons more energy than a tape library. Tape wins again. Then consider the cooling costs that go along with all those disk drives spinning platters.  Tape wins, creating a greener solution that is more cost effective. At HPE and available from DTC, we have white papers and presentations on just this topic of cost savings.   In summary, if a company is not looking at or using LTO tape, then their data retention, data protection and data archiving needs are just not yet at the breaking point. 

There seems to be an emergence of the Disk / Hard Drive backup option being utilized by so many businesses. Do you feel like LTO Tape will ever be looked at with the same level of respect or appreciation by those same businesses?

If you are talking about solid state disk for high access, and dedicated disk drive solutions for backup – sure that works.  But at some point you need multiple copies at multiple locations to protect your investment.  The downside of most disk only solutions is that all the data is accessible across the network.  Nowadays, ransomware and cybersecurity are among the biggest threats to corporations, government agencies and even mom and pop SMBs.  The unique advantage of adding LTO tape based tape libraries is that the data is NOT easily tapped into because the physical media is not in the tape drive.  Again, HPE has very detailed white papers and presentations on this Air Gap principle, all available from DTC. 

LTO Tape vs Hard Drive seems to be the big two in terms of the data / backup realm, as an insider to this topic, where do you see this battle going in the far future?

It’s less of a battle and more of a plan to ‘divide the work load and let’s work together’.  In most environments, tape and disk work side by side with applications selecting where the data is kept. However, there are physical limitations on how much space is available on a spinning platter or set of platters, and this will dramatically slow down the growth of their capacity within a given form factor. With LTO tape technology, the physical areal footprint is so much bigger, because of the thousands of feet of tape within each tape cartridge. At LTO-8 we have 960 meters of tape to write on. Even at a half inch wide, that’s a lot of space for data. Both disk and tape technologies will improve how much data they can fit on their media, (areal density) but LTO tape just has the advantage of so much space to work with. LTO tape will continue to follow the future roadmap which is already spec’d out to LTO-12.  

With so many years in this industry, what has been the highlight of your career?

The technology has always impressed me, learning and talking about the details of a particular technical design advantage. Then, being able to work with a wide range of IT specialists and learning about their business and what they actually do with the data.  But when I look back, on the biggest highlights,  I remember all the great people that I have worked with side by side to solve customer’s storage and data protection problems.  Sometimes we won, sometimes we didn’t.  I will never forget working to do our best for the deal. 

What tech advancements do you hope to see rolled out that would be a game changer for data storage as a whole?

The data storage evolution is driven by the creation of more data, every day.  When one technology fails to keep pace with the growth, another one steps up to the challenge.  Like I have said, LTO tape has a pretty solid path forward for easily 6 more years of breakthrough advancements. In 6 years, I’m sure there will be some new technology working to knock out LTO, some new technology that today is just an idea. 

We see more and more companies getting hit every day with ransomware / data theft due to hackers, what are your thoughts on this and where do you see things going with this. Will we ever reach a point where this will start to level off or become less common?

Ransomware and cyber security are the hot topics keeping IT Directors and business owners up at night. It is a criminal activity that is highly lucrative. Criminals will continue to attempt to steal data, block access and hold companies for ransom wherever they can.  But they prefer easy targets. As I mentioned earlier, Tape Solutions offer one key advantage in this battle: if the data isn’t live on the network, the hacker has to work harder. This is a critical step to protect your data. 

For more information on Pat, data backup / storage, + more follow Pat on Twitter:

The TikTok Controversy: How Much Does Big Tech Care About Your Data and its Privacy?

If you have a teenager in your house, you’ve probably encountered them making weird dance videos in front of their phone’s camera. Welcome to the TikTok movement that’s taking over our nation’s youth. TikTok is a popular social media video sharing app that continues to make headlines due to cybersecurity concerns. Recently, the U.S. military banned its use on government phones following a warning from the DoD about potential personal information risk. TikTok has now verified that it patched multiple vulnerabilities that exposed user data. In order to better understand TikTok’s true impact on data and data privacy, we’ve compiled some of the details regarding the information TikTok gathers, sends, and stores.

What is TikTok?

TikTok is a video-sharing application that allows users to create short, fifteen-second videos on their phones and post the content to a public platform. Videos can be enriched with music and visual elements, such as filters and stickers. The app’s young adolescent demographic, along with the content that is created and shared on the platform, has put its privacy features in the limelight as of late. Even more so, questions about the location of TikTok data storage and who can access it have raised red flags.

You can review TikTok’s privacy statement for yourself here.

TikTok Security Concerns

Even though TikTok allows users to control who can see their content, the app does ask for a number of consents on your device. Most noteworthy, it accesses your location and device information. While there is no evidence of malicious activity or that TikTok is violating its privacy policy, it is still advisable to practice caution with the content that is both created and posted.

The biggest concern surrounding the TikTok application is where user information is stored and who has access to it. According to the TikTok website, “We store all US user data in the United States, with backup redundancy in Singapore. Our data centers are located entirely outside of China, and none of our data is subject to Chinese law.” “The personal data that we collect from you will be transferred to, and stored at, a destination outside of the European Economic Area (“EEA”).” There is no other specific information regarding where user data is stored.

Recently, TikTok published a Transparency Report which lists “legal requests for user information”, “government requests for content removal”, and “copyrighted content take-down notices”. The “Legal Requests for User Information” section shows that India, the United States, and Japan are the top three countries where user information was requested. The United States ranked first in both fulfilled requests (86%) and the number of accounts specified in the requests (255). Oddly enough, China is not listed as having received any requests for user information. 

What Kind of Data is TikTok Tracking?

Below are some of the consents TikTok requires on Android and iOS devices after the app is installed. While some of the permissions are to be expected, all of them are consistent with TikTok’s written privacy policy. Still, the full list of what TikTok gathers from its users can be alarming. In short, the app allows TikTok to:

  • Access the camera (and take pictures/video), the microphone (and record sound), the device’s WIFI connection, and the full list of contacts on your device.
  • Determine if the internet is available and access it if it is.
  • Keep the device turned on and automatically start itself.
  • Secure detailed information on the user’s location using GPS.
  • Read and write to the device’s storage, install/remove shortcuts, and access the flashlight (turn it off and on).

You read that right: TikTok has full access to your audio, video, and the list of contacts on your phone. The geolocation tracking via GPS is somewhat surprising, though, especially since TikTok videos don’t display location information. So why collect that information? If you use an Android device, TikTok also has the capability of detecting other apps running at the same time, which could give it access to data in another app such as a banking or password storage app. 

Why is TikTok Banned by the US Military?

In December 2019, the US military started instructing soldiers to stop using TikTok on all government-owned phones. This policy reversal came shortly after the release of a Dec. 16 Defense Department Cyber Awareness Message classifying TikTok as having potential security risks associated with its use. As the US military cannot prevent government personnel from accessing TikTok on their personal phones, leaders recommended that service members use caution if unfamiliar text messages are received.

In fact, this was not the first time that the Defense Department had been required to encourage service members to remove a popular app from their phones. In 2016, the Defense Department banned the augmented-reality game, Pokémon Go, from US military owned smartphones. However, this case was a bit different as military officials alluded to concerns over productivity and the potential distractions it could cause. The concerns over TikTok are focused on cybersecurity and spying by the Chinese government.

In the past, the DoD has put out more general social media guidelines, advising personnel to proceed with caution when using any social platform. And all DoD personnel are required to take annual cyber awareness training that covers the threats that social media can pose.

3-2-1 Backup Rule

What is the 3-2-1 Backup Rule?


The 3-2-1 backup rule is a concept made famous by photographer Peter Krogh. He basically said there are two types of people: those who have already had a storage failure and those who will have one in the future. It’s inevitable. The 3-2-1 backup rule helps to answer two important questions: how many backup files should I have, and where should I store them?

The 3-2-1 backup rule goes as follows:

  • Have at least three copies of your data.
  • Store the copies on two different media.
  • Keep one backup copy offsite.
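The three checks above can be sketched in a few lines of Python. The inventory format here, a list of (media type, offsite) pairs, is a hypothetical convention invented for this illustration, not part of any real backup tool:

```python
# Minimal sketch of checking a backup inventory against the 3-2-1 rule.
# The (media_type, is_offsite) tuple format is an assumption made for
# this example.
def satisfies_3_2_1(copies):
    """copies: list of (media_type, is_offsite) tuples, primary copy included."""
    enough_copies = len(copies) >= 3                         # at least 3 copies
    enough_media = len({media for media, _ in copies}) >= 2  # on 2+ media types
    has_offsite = any(offsite for _, offsite in copies)      # 1+ copy offsite
    return enough_copies and enough_media and has_offsite

inventory = [("internal_disk", False), ("lto_tape", False), ("cloud", True)]
print(satisfies_3_2_1(inventory))  # True
```

Dropping the cloud copy, or keeping all three copies on the same media type, would make the check fail.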

Need help with building your data backup strategy?

  1. Create at least THREE different copies of your data

Yes, I said three copies. That means that in addition to your primary data, you should also have at least two more backups that you can rely on if needed. But why isn’t one backup sufficient, you ask? Imagine keeping your original data on storage device A and its backup on storage device B. Both storage devices have the same characteristics, and they have no common failure causes. If device A has a probability of failure of 1/100 (and the same is true for device B), then the probability of both devices failing at the same time is 1/10,000.

So with THREE copies of data, your primary data (device A) plus two backups of it (devices B and C), and assuming all devices have the same characteristics and no common failure causes, the probability of all three devices failing at the same time drops to 1/1,000,000. That’s much better than having only one copy and a 1/100 chance of losing it all, wouldn’t you say? Keeping more than two copies of data also avoids the situation where, in the event of a natural disaster, the primary copy and its only backup are sitting in the same physical location.
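The arithmetic above is just independent failure probabilities multiplying together; a few lines of Python make the pattern explicit:

```python
# Chance of losing every copy at once, assuming each device fails
# independently with the same probability.
def prob_total_loss(failure_rate, copies):
    return failure_rate ** copies

print(prob_total_loss(0.01, 1))  # one copy:     1 in 100
print(prob_total_loss(0.01, 2))  # two copies:   1 in 10,000
print(prob_total_loss(0.01, 3))  # three copies: about 1 in 1,000,000
```

Each extra independent copy divides your chance of total loss by another factor of 100, which is why the jump from one copy to three is so dramatic.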

  2. Store your data on at least TWO different types of media

Now in the last scenario above we assumed that there were no common failure causes across the devices that contain your precious data. Clearly, this requirement is much harder to fulfill if your primary data and its backup are located in the same place. Disks from the same RAID array aren’t typically independent, and it is not uncommon for one or more disks from the same storage enclosure to fail around the same time.

This is where the “2” in the 3-2-1 rule comes in. It is recommended that you keep copies of your data on at least TWO different storage types, for example, internal hard disk drives AND removable storage media such as tapes, external hard drives, USB drives, or SD cards. It is even possible to keep the data on two internal hard disk drives in different storage locations.
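As a rough illustration of writing the same backup to two media, here is a short Python sketch. The destination paths in the example are made up for illustration; in practice, one target would sit on removable media or a tape mount point rather than the same internal disk:

```python
import shutil
from pathlib import Path

def backup_to_two_media(source, destinations):
    """Copy `source` to every destination so that no single
    medium holds the only backup copy."""
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)   # make sure the target exists
        shutil.copy2(source, dest / source.name)  # copy2 preserves timestamps

# Hypothetical example paths:
# backup_to_two_media(Path("payroll.db"),
#                     [Path("/mnt/internal_disk/backups"),
#                      Path("/media/usb_drive/backups")])
```

A real backup job would add verification (checksums) and rotation, but the principle, one write per medium, is the same.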


Learn more about purchasing tape media to expand your data storage strategy 

  3. Store at least ONE of these copies offsite

Believe it or not, physical separation between data copies is crucial. It’s a bad idea to keep your external storage device in the same room as your primary storage device. Just ask the numerous companies located in the path of a tornado or in a flood zone. And what would you do if your business caught fire? If you work for a smaller company with only one location, storing your backups in the cloud is a smart alternative. Tapes stored at an offsite location are also popular among companies of all sizes.


Every system administrator should have a backup. This principle works for any virtual environment; regardless of the system you are running, backup is king!
