
    Microsoft’s Project Natick: The Underwater Data Center of the Future

    When you think of underwater, deep-sea adventures, what comes to mind? Colorful plants, odd-looking sea creatures, and maybe even a shipwreck or two; but what about a data center? Moving forward, underwater data centers may become the norm rather than an anomaly. Back in 2018, Microsoft sank an entire data center, packing 864 servers and 27.6 petabytes of storage, to the bottom of the sea off Scotland. After two years of sitting 117 feet deep in the ocean, Microsoft’s Project Natick, as it’s known, has been brought to the surface and deemed a success.

    What is Project Natick?


    Microsoft’s Project Natick was conceived back in 2015, around the idea that submerged servers could significantly lower energy usage. When the original hypothesis came to light, Microsoft immersed a data center off the coast of California for several months as a proof of concept, to see if the computers would even endure the underwater journey. Ultimately, the experiment was designed to show that portable, flexible data center deployments in coastal areas around the world could scale up data center capacity while keeping energy and operating costs low. Doing this would allow companies to utilize smaller data centers closer to where customers need them, instead of routing everything to centralized hubs. Next, the company will look into increasing the size and performance of these data centers by connecting more than one together to merge their resources.

    What We Learned from Microsoft’s Undersea Experiment

    After two years of being submerged, the results of the experiment not only showed that offshore underwater data centers perform well overall, but also revealed that the servers inside proved to be up to eight times more reliable than their above-ground equivalents. The team of researchers plans to examine exactly what was responsible for this greater reliability. For now, steady temperatures, no oxygen corrosion, and a lack of humans bumping into the computers are thought to be the reasons. Hopefully, the same outcome can be transposed to land-based server farms for increased performance and efficiency across the board.

    Additional findings included the ability to operate with greater power efficiency, especially in regions where the grid on land is not considered reliable enough for sustained operation. Microsoft will also take lessons on renewable energy from the project’s successful deployment, with Natick relying on wind, solar, and experimental tidal technologies. As for future underwater servers, Microsoft acknowledged that the project is still in its infancy. However, building a data center with the same capabilities as a standard Microsoft Azure region would require multiple vessels.


    The Benefits of Submersible Data Centers


    The use of a natural cooling agent instead of energy to cool a data center is an obvious positive outcome of the experiment. When Microsoft hauled its underwater data center up from the bottom of the North Sea and conducted its analysis, researchers also found the servers were eight times more reliable than those on land.

    The shipping-container-sized pod that was recently pulled from 117 feet below the North Sea off Scotland’s Orkney Islands was deployed in June 2018. Over the last two years, researchers observed the performance of 864 standard Microsoft data center servers installed on 12 racks inside the pod. During the experiment they also learned more about the economics of modular undersea data centers, which can be quickly set up offshore near population centers and require fewer resources for efficient operation and cooling.

    Natick researchers hypothesize that the servers benefited from the pod’s nitrogen atmosphere, which is less corrosive than oxygen. The absence of human interaction with the components also likely contributed to the increased reliability.

    The North Sea project also demonstrated the possibility of leveraging green technologies for data center operations. The data center was connected to the local electric grid, which is 100% supplied by wind, solar, and experimental energy technologies. In the future, Microsoft plans to explore eliminating the grid connection altogether by co-locating a data center with an ocean-based green power system, such as offshore wind or tidal turbines.

    Cyber Insurance in the Modern World

    Yes, you read that correctly: cyber insurance is a real thing, and it does exactly what it says. No, cyber insurance can’t defend your business from a cyberattack, but it can keep your business afloat with financial support should a data security incident happen. Most organizations run their business and reach potential customers via social media and internet-based transactions. Unfortunately, those modes of communication also serve as openings for cyberattack. The odds are not in your favor: cyberattacks are likely to occur and have the potential to cause serious losses for organizations both large and small. As part of a risk management plan, organizations regularly must decide which risks to avoid, accept, control, or transfer. Transferring risk is where cyber insurance pays massive dividends.


    What is Cyber Insurance?

    By definition, a cyber insurance policy, also known as cyber risk insurance (CRI) or cyber liability insurance coverage (CLIC), is meant to help an organization mitigate the risk of a cyber-related security breach by offsetting the costs involved in the recovery. Cyber insurance started making waves in 2005, with the total value of premiums projected to reach $7.5 billion by 2020. According to audit and assurance consultants PwC, about 33% of U.S. companies currently hold a cyber insurance policy. Clearly companies are feeling the need for cyber insurance, but what exactly does it cover? Depending on the policy, cyber insurance covers expenses incurred by the policyholder as well as claims made by affected third parties.

    Below are some common reimbursable expenses:

    • Forensic Investigation: A forensic investigation is needed to establish what occurred, the best way to repair the damage, and how to prevent a similar security breach from happening again. This may include coordination with law enforcement and the FBI.
    • Business Losses Incurred: A typical policy may cover items similar to those in an errors & omissions policy, as well as financial losses caused by network downtime, business disruption, data loss recovery, and reputation repair.
    • Privacy and Notification Services: This involves mandatory data breach notifications to customers and involved parties, and credit monitoring for customers whose information was or may have been compromised.
    • Lawsuits and Extortion Coverage: This includes legal expenses related to the release of confidential information and intellectual property, legal settlements, and regulatory fines. It may also include the costs associated with ransomware extortion.

    Like everything in the IT world, cyber insurance is continuously changing and growing. Cyber risks change often, and organizations tend to avoid reporting the true impact of security breaches in order to prevent negative publicity. Because of this, policy underwriters have limited data from which to estimate the financial impact of attacks.

    How do cyber insurance underwriters determine your coverage?


    As any insurance company does, cyber insurance underwriters want to see that an organization has taken it upon itself to assess its vulnerability to cyberattacks. This cyber risk profile should also show that the company follows best practices by implementing defenses and controls to protect against potential attacks. Employee education in the form of security awareness training, especially around phishing and social engineering, should also be part of the organization’s security protection plan.

    Cyberattacks against enterprises of all sizes have been increasing over the years. Small businesses tend to assume they’re too small to be worth the effort of an attack. Quite the contrary: Symantec found that over 30% of phishing attacks in 2015 were launched against businesses with under 250 employees, and its 2016 Internet Security Threat Report indicated that 43% of all attacks in 2015 targeted small businesses.

    You can download Symantec’s 2016 Internet Security Threat Report here

    The Centre for Strategic and International Studies estimates that the annual cost of cybercrime to the global economy is between $375 billion and $575 billion, with the average data breach costing larger companies over $3 million per incident. Every organization is different and must therefore decide whether it is willing to risk that amount of money, or whether cyber insurance is necessary to cover the costs it could potentially sustain.
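    To make that decision concrete, a back-of-the-envelope expected-loss comparison helps. Below is a minimal sketch in Python; the ~$3 million incident cost comes from the figures above, while the breach probability and premium are purely hypothetical placeholders:

```python
# Rough expected-loss vs. premium comparison for the risk-transfer decision.
avg_incident_cost = 3_000_000  # average breach cost for larger companies (cited above)
annual_breach_prob = 0.10      # assumed: 10% chance of a serious breach per year
annual_premium = 150_000       # assumed: quoted cyber insurance premium

expected_annual_loss = annual_breach_prob * avg_incident_cost
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Annual premium:       ${annual_premium:,.0f}")

# On pure expected-value grounds (ignoring risk aversion), transferring
# the risk makes sense whenever the expected loss exceeds the premium.
if expected_annual_loss > annual_premium:
    print("Transferring the risk via cyber insurance looks worthwhile.")
else:
    print("Self-insuring may be acceptable at this risk level.")
```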

    As stated earlier in the article, cyber insurance covers first-party losses and third-party claims, whereas general liability insurance only covers property damage. Sony is a great example of when cyber insurance comes in handy. Sony was caught in the 2011 PlayStation hacker breach, with costs reaching $171M. Those costs could have been offset by cyber insurance had the company made certain it was covered beforehand.

    The cost of cyber insurance coverage and premiums is based on an organization’s industry, the type of services it provides, its data risks and exposures, its policies, and its annual gross revenue. Every business is different, so it’s best to consult with your policy provider when seeking more information about cyber insurance.

    HPE vs Dell: The Battle of the Servers

    When looking at purchasing new servers for your organization, deciding which to choose can be a real dilemma. With so many brands offering so many features, the current server market may seem a bit saturated. Well, this article does the hard work for you. We’ve narrowed the list of server manufacturers down to two key players: Dell and Hewlett Packard Enterprise (HPE). We will help you with your next purchase decision by comparing the qualities and features of each, such as customer support, dependability, overall features, and cost. These are some of the major items to consider when investing in a new server. So, let’s begin.

    Customer Support – Dell

    The most beneficial thing about Dell customer support is that the company doesn’t require a paid support program to download updates or firmware. Dell ProSupport is considered one of the more consistently reliable support programs in the industry. That being said, rumors have been circulating that Dell will require a support contract for downloads in the future.

    You can find out more about Dell ProSupport here.

    Customer Support – HPE

    Unlike Dell, HPE currently requires businesses to have a support contract to download any new firmware or updates. It can be tough to find drivers and firmware through HPE’s platform even if you do have a contract in place, and HPE’s website is a bit challenging to use when looking for support information in general. On a brighter note, the support documentation provided is extremely thorough, and those with know-how can find manuals for essentially anything you need. By creating an online account through HPE’s website, you can gain access to HPE’s 24/7 support, manage future orders, and utilize the HPE Operational Support Services experience.

    Customer Support Winner: Dell

    Dependability – Dell

    I’ll be the first to say that I’m not surprised whenever I hear about Dell servers running for years on end without any issues. Dell has been very consistent about continually improving its servers. Dell is the Toyota of the server world.

    Dependability – HPE

    Despite the reliability claims made for HPE’s Superdome, Apollo, and newer ProLiant lines of servers, HPE is known to have faults within its servers. In fact, in a survey conducted in mid-2017, HPE ProLiant servers had about 2.5 times as much downtime as Dell PowerEdge servers. However, HPE does a remarkable job with predictive alerts for parts that are deemed likely to fail, giving businesses an opportunity to repair or replace parts before they experience downtime.

    Dependability Winner: Dell

    Out of Band Management Systems

    In regard to out-of-band management systems, HPE’s system is known as Integrated Lights-Out (iLO), and Dell’s is known as the Integrated Dell Remote Access Controller (iDRAC). In the past there were some major differences between the two, but today the IPMI implementations don’t differ enough to be a big determining factor. Both systems now provide similar features, such as HTML5 support. However, a few differences remain (a vendor-neutral status check is sketched below).
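    Because both controllers speak standard IPMI, a basic health check looks the same regardless of vendor. Here is a minimal sketch using Python’s subprocess module to call the common ipmitool utility; the host address and credentials are placeholders, and it assumes ipmitool is installed and the BMC’s LAN interface is enabled:

```python
import subprocess

def bmc_power_status(host: str, user: str, password: str) -> str:
    """Query the chassis power state over IPMI; works with both iDRAC and iLO."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Placeholder address and credentials; substitute your BMC's details.
print(bmc_power_status("10.0.0.50", "admin", "secret"))
```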

    Out of Band Management Systems – Dell

    Dell’s iDRAC has progressed quite a bit in recent years. Since iDRAC 7, Java is no longer needed, though the graphical user interface is not quite as polished as HPE’s. iDRAC uses a physical license, which can be purchased on the secondary market, avoiding OEM lock-in after end of life. Updates generally take a bit longer with iDRAC.

    Out of Band Management Systems – HPE

    HPE’s iLO Advanced console requires a license, but the standard console is included. Using the advanced console can ultimately lock you in with the OEM if your servers go end of life, as those licenses can’t be purchased on the secondary market. Although it’s been noted that a single advanced product key can be reused on multiple servers, doing so is against HPE’s terms of service. Generally, the GUI with iLO Advanced feels more natural and the platform seems quicker.

    Out of Band Management Systems Winner: HPE

    Cost of Initial Investment – Dell

    Price flexibility is almost nonexistent when negotiating with Dell, although Dell has been known to offer better deals to bigger, repeat customers. In the past Dell was seen as the more affordable option, but initial investment costs are nearly identical now. With Dell still typically being slightly less expensive, it tends to be the preference of enterprise professionals trying to keep costs low to increase revenue. Simply put, Dell is cheaper because it is so widely used, and everyone uses it because it’s more cost-effective.

    Cost of Initial Investment – HPE

    HPE is generally more open to price negotiation, even though opening quotes are similar to Dell’s. As with everything in business, your relationship with the vendor will be a much greater factor in determining price. Those who order in larger quantities, more frequently, will usually have the upper hand in negotiations. That being said, HPE servers tend to be a little more expensive on average. When cost is not a factor, HPE tends to be the choice where long-term performance is the more important objective. HPE servers are supported globally through a number of channels, and due to the abundance of used HPE equipment on the market, replacement parts are fairly easy to come by. HPE also offers a more thorough documentation system, containing manuals for every little-known part HPE has ever made. HPE is enterprise class, whereas Dell is business class.

    Cost of Initial Investment Winner: Tie

    The Decisive Recap

    When it really comes down to it, HPE and Dell are very similar companies with comparable features. When assessing HPE vs Dell servers, there is no single winner. There isn’t a major distinction between the companies in manufacturing quality, cost, or dependability; those are factors that should be weighed on a case-by-case basis.

    If you’re planning on replacing your existing hardware, sell your old equipment to us! We’d love to help you sell your used servers.

    You can start by sending us a list of the equipment you want to sell. Not only do we buy used IT equipment, we also offer a range of related services.

    The Role of Cryptocurrencies in the Age of Ransomware

    Now more than ever, there is an obvious connection between the rising ransomware era and the cryptocurrency boom. Believe it or not, cryptocurrency and ransomware have an extensive history with one another. They are so closely linked that many have attributed the rise in ransomware attacks across the globe to the rise of cryptocurrency. There is no debating that ransomware attacks are escalating at an alarming rate, but there is no solid evidence of a direct causal link to cryptocurrency. Even though the majority of ransoms are paid in crypto, the transparency of the currency’s blockchain makes it a terrible place to keep stolen money.

    The link between cryptocurrency and ransomware attacks

    There are two key ways that ransomware attacks rely on the cryptocurrency market. First, the ransoms paid during these attacks are usually demanded in cryptocurrency. A perfect example is the largest ransomware attack in history, the WannaCry ransomware attacks, in which attackers demanded that victims pay nearly $300 in Bitcoin (BTC) to release their captive data.

    A second way that cryptocurrencies and ransomware attacks are linked is through what is called “ransomware as a service”. Plenty of cybercriminals offer ransomware as a service, essentially letting anyone hire a hacker via online marketplaces. How do you think they want payment for their services? Cryptocurrency.

    Read more about the WannaCry ransomware attacks here

    Show Me the Money

    From an outsider’s perspective, it seems clear why hackers would demand ransom payments in cryptocurrency: a blockchain built on privacy and encryption seems like the best place to hide stolen money. Well, think again. There is actually a different reason why ransomware attacks make use of cryptocurrencies. The efficiency of cryptocurrency blockchain networks, rather than their concealment, is what really draws cybercriminals in.

    The value of cryptocurrency during a cyberattack lies in the transparency of crypto exchanges. A ransomware attacker can keep an eye on the public blockchain to see whether victims have paid their ransom, and can automate the procedures needed to give victims their stolen data back.

    On the other hand, the cryptocurrency market is possibly the worst place to keep stolen funds. The transparency of the blockchain means that the whole world can closely monitor the movement of ransom money. This makes it tricky to convert stolen funds into another currency without being tracked by law enforcement.

    Read about the recent CSU college system ransomware attack here

    Law and Order

    Now, just because a paid ransom can be tracked on the blockchain doesn’t automatically mean that the hackers who committed the crime can be caught. Due to the pseudonymity of cryptocurrency, it is very difficult for law enforcement agencies to uncover the true identity of cybercriminals. However, there are always exceptions to the rule.

    The blockchain allows every transaction involving a given Bitcoin address to be traced all the way back to its original transaction. This gives law enforcement access to the financial records required to trace a ransom payment, in a way that would never be possible with cash transactions.
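    To illustrate why that traceability matters, here is a toy sketch in Python. The transaction data is entirely invented, but the backward walk from a ransom payment to its source mirrors what analysts do with real chain data:

```python
# Toy transaction graph: each transaction references the one that funded it.
# All names and IDs are invented for illustration.
transactions = {
    "tx3": {"from": "victim_wallet",   "to": "ransom_wallet",   "input": "tx2"},
    "tx2": {"from": "exchange_wallet", "to": "victim_wallet",   "input": "tx1"},
    "tx1": {"from": "coinbase",        "to": "exchange_wallet", "input": None},
}

def trace_back(txid):
    """Follow each transaction's input back to the original coin creation."""
    while txid is not None:
        tx = transactions[txid]
        print(f"{txid}: {tx['from']} -> {tx['to']}")
        txid = tx["input"]

trace_back("tx3")  # walks the ransom payment back to its origin
```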

    Following several recent and prominent ransomware attacks, authorities have called for the cryptocurrency market to be watched more closely. Any such oversight will need to be implemented carefully, so as not to undermine the anonymity that makes the currency attractive.

    Protect Yourself Any Way You Can

    The lack of legislative control over the cryptocurrency market, combined with the rapid rise in ransomware attacks, means that individuals need to take it upon themselves to protect their data. Some organizations have taken extraordinary approaches, such as hoarding Bitcoin in case they need to pay a ransom in a future attack.

    For the rest of us, protecting against ransomware attacks means covering your bases. Double-check that all of your cybersecurity software is up to date, subscribe to a secure cloud storage provider, and back up your data regularly. Companies of all sizes should implement the 3-2-1 data backup strategy in case of a ransomware attack. The 3-2-1 backup plan states that you should have at least three copies of your data, stored on at least two different types of media, with at least one copy offsite. It also helps to have a separate copy of your data stored via the air-gap method, keeping it off the network so it cannot be reached by attackers.
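    As a quick illustration of covering your bases, the sketch below checks a backup inventory against the 3-2-1 rule; the inventory format and its entries are hypothetical:

```python
# Hypothetical inventory: one entry per copy of the data.
backups = [
    {"location": "office", "media": "internal_disk"},
    {"location": "office", "media": "lto_tape"},
    {"location": "cloud",  "media": "object_storage"},
]

def satisfies_3_2_1(copies):
    """At least 3 copies, on at least 2 media types, with at least 1 offsite."""
    media_types = {c["media"] for c in copies}
    offsite = [c for c in copies if c["location"] != "office"]
    return len(copies) >= 3 and len(media_types) >= 2 and len(offsite) >= 1

print("3-2-1 compliant:", satisfies_3_2_1(backups))  # True
```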

    Learn More About Getting Your 3-2-1 Backup Plan in Place

    TapeChat with Pat

    At DTC, we value great relationships. Luckily for us, we have some of the best industry contacts out there when it comes to tape media storage and backup. Patrick Mayock, a Partner Development Manager at Hewlett Packard Enterprise (HPE), is one of those individuals. Pat has been with HPE for the last 7 years and before that spent 30 years in the data backup and storage industry. Pat is our go-to guy at HPE, a true source of support, and an overall great colleague, so for our TapeChat series Pat was our top choice. Pat’s resume is an extensive one that would impress anyone who sees it. He started his data and media storage journey back in the early ’90s in the Bay Area; fast forward to today, and Pat can be found in the greater Denver area with the great minds over at HPE. Pat knows his stuff, so sit back and enjoy this little Q&A we set up for you. We hope you enjoy it, and without further ado, we welcome you to our series, TapeChat (with Pat)!

    Pat, thank you for taking the time to join us digitally for this online Q&A. We would like to start off by stating how thrilled we are to have you with us. You’re an industry veteran and we’re honored to have you involved in our online content.

    Thanks for the invite.  I enjoy working with your crew and am always impressed by your innovative strategies to reach out to new prospects and educate existing customers on the growing role of LTO tape from SMB to the Data Center. 

    Let’s jump right into it! For the sake of starting things out on a fun note, what is the craziest story or experience you have had or know of involving the LTO / Tape industry? Maybe a fun fact that most are unaware of, or something you would typically tell friends and family… Anything that stands out…

    I’ve worked with a few tape library companies over the years, and before that I sold the original 9-track ½-inch tape drives. Those were monsters, but you would laugh at how little data they stored on a reel of tape. One of the most memorable projects I worked on was in the Bay Area, at Oracle headquarters. They had the idea to migrate from reel-to-reel tape drives and replace them with compact, rack-mounted, ‘robotic’ tape libraries. In the end, they replaced those library-type shelves storing hundreds of reels of tape with 32 tape libraries in their computer cabinets. Each tape library had room for 40 tape slots and four 5¼-inch full-height tape drives. The contrast was impressive. To restore data, they went from IT staffers physically moving tape media in ‘sneaker mode’ to having software locate where the data was stored, grab and load the tape automatically in the tape library, and start reading data. Ok, maybe too much of a tape story, but as a young sales rep at the time it was one that I’ll never forget.

    For someone who has been doing this as long as you have, what industry advancements and releases still get you excited to this day? What is Pat looking forward to right now in the LTO tape world?

    I’m lucky. We used to have five or more tape technologies all fighting for their place in the data protection equation, each from a different vendor. Now, Ultrium LTO tape has the majority of the market and is supported by a coalition of technology vendors working together to advance the design. Some work on the physical tape media, some on the read/write heads, and some on the tape drive itself. The business has become more predictable and more reliable. About every two years the consortium releases the next level of LTO tape technology, and we will see LTO-9 public announcements begin by the end of 2020. The thirst for higher storage capacity and higher performance in the same physical space is what keeps me more than optimistic about the future.

    When our sales team is making calls and asks a business if they are still backing up to LTO tape, that question is often met with an unappreciative, “that’s outdated” response; in some cases we get laughter and something along the lines of “people still use tape?” Why do you think LTO as a backup option gets this type of response? What is it specifically about the technology that makes businesses feel as if LTO tape is a thing of the past?

    As a Tape Guy, I hear that question a lot. The reality in the market is that some industries are generating so much data that they have to increase their dependence on tape-based solutions as part of their storage hierarchy. It starts with the simple cost comparison of data on a single disk drive versus that same amount of data on an LTO tape cartridge. LTO tape wins. But the real impact is much bigger than that. Think about the really large data center facilities. The bigger consideration is, for a given amount of data (a lot), what solution can fit the most data into a cabinet-sized footprint? Physical floor space in the data center is at a premium. Tape wins. Then consider the cost of keeping that data accessible: a rack of disk drives consumes far more energy than a tape library. Tape wins again. Then consider the cooling costs that go along with all those spinning platters. Tape wins, creating a greener solution that is more cost-effective. At HPE, and available from DTC, we have white papers and presentations on exactly this topic of cost savings. In summary, if a company is not looking at or using LTO tape, then its data retention, data protection, and data archiving needs have just not yet hit the breaking point.

    Disk and hard drive backup options seem to be emerging as the choice of many businesses. Do you feel LTO tape will ever be looked at with the same level of respect or appreciation by those businesses?

    If you are talking about solid-state disk for high access, and dedicated disk drive solutions for backup, sure, that works. But at some point you need multiple copies at multiple locations to protect your investment. The downside of most disk-only solutions is that all the data is accessible across the network. Nowadays, ransomware and cybersecurity are among the biggest threats to corporations, government agencies, and even mom-and-pop SMBs. The unique advantage of adding LTO tape libraries is that the data is NOT easily tapped into, because the physical media is not in the tape drive. Again, HPE has very detailed white papers and presentations on this air-gap principle, all available from DTC.

    LTO tape and hard drives seem to be the big two in the data backup realm. As an insider on this topic, where do you see this battle going in the future?

    It’s less of a battle and more of a plan to divide the workload and work together. In most environments, tape and disk work side by side, with applications selecting where the data is kept. However, there are physical limitations on how much space is available on a spinning platter or set of platters, and this will dramatically slow the growth of their capacity within a given form factor. With LTO tape technology, the physical areal footprint is so much bigger, because of the thousands of feet of tape within each cartridge. At LTO-8 we have 960 meters of tape to write on; even at half an inch wide, that’s a lot of space for data. Both disk and tape technologies will keep improving how much data they can fit on their media (areal density), but LTO tape simply has the advantage of so much more surface to work with. LTO tape will continue to follow the future roadmap, which is already spec’d out to LTO-12.
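    Pat’s point about the areal footprint is easy to sanity-check with rough numbers. The short Python calculation below uses the 960 m LTO-8 tape length from the interview; the platter size and count for the 3.5-inch drive are rough assumptions:

```python
import math

# LTO-8 tape: 960 m long (from the interview), half an inch (~12.7 mm) wide.
tape_area_m2 = 960 * 0.0127

# 3.5-inch drive: assume nine ~95 mm platters, recorded on both sides.
platter_side_m2 = math.pi * (0.095 / 2) ** 2
disk_area_m2 = 9 * 2 * platter_side_m2

print(f"Tape surface: {tape_area_m2:.1f} m^2")   # ~12.2 m^2
print(f"Disk surface: {disk_area_m2:.2f} m^2")   # ~0.13 m^2
print(f"Tape has roughly {tape_area_m2 / disk_area_m2:.0f}x the recordable area")
```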

    With so many years in this industry, what has been the highlight of your career?

    The technology has always impressed me; I enjoy learning and talking about the details of a particular technical design advantage, and working with a wide range of IT specialists, learning about their businesses and what they actually do with the data. But when I look back on the biggest highlights, I remember all the great people I have worked with, side by side, to solve customers’ storage and data protection problems. Sometimes we won, sometimes we didn’t. I will never forget working to do our best for the deal.

    What tech advancements do you hope to see rolled out that would be a game changer for data storage as a whole?

    The data storage evolution is driven by the creation of more data, every day.  When one technology fails to keep pace with the growth, another one steps up to the challenge.  Like I have said, LTO tape has a pretty solid path forward for easily 6 more years of breakthrough advancements. In 6 years, I’m sure there will be some new technology working to knock out LTO, some new technology that today is just an idea. 

    We see more and more companies getting hit every day with ransomware and data theft. What are your thoughts on this, and where do you see things going? Will we ever reach a point where this starts to level off or becomes less common?

    Ransomware and cyber security are the hot topics keeping IT Directors and business owners up at night. It is a criminal activity that is highly lucrative. Criminals will continue to attempt to steal data, block access and hold companies for ransom wherever they can.  But they prefer easy targets. As I mentioned earlier, Tape Solutions offer one key advantage in this battle: if the data isn’t live on the network, the hacker has to work harder. This is a critical step to protect your data. 

    For more information on Pat, data backup and storage, and more, follow Pat on Twitter.

    The TikTok Controversy: How Much Does Big Tech Care About Your Data and its Privacy?

    If you have a teenager in your house, you’ve probably encountered them making weird dance videos in front of their phone’s camera. Welcome to the TikTok movement that’s taking over our nation’s youth. TikTok is a popular social media video-sharing app that continues to make headlines due to cybersecurity concerns. Recently, the U.S. military banned its use on government phones following a warning from the DoD about potential risks to personal information. TikTok has since confirmed that it patched multiple vulnerabilities that exposed user data. To better understand TikTok’s true impact on data privacy, we’ve compiled some details about the information TikTok gathers, sends, and stores.

    What is TikTok?

    TikTok is a video-sharing application that allows users to create short, fifteen-second videos on their phones and post the content to a public platform. Videos can be enriched with music and visual elements, such as filters and stickers. Its young, adolescent demographic, along with the content that is created and shared on the platform, has put the app’s privacy features in the limelight as of late. Even more so, questions about where TikTok data is stored and who can access it have raised red flags.

    You can review TikTok’s privacy statement for yourself here.

    TikTok Security Concerns

    Even though TikTok allows users to control who can see their content, the app does ask for a number of permissions on your device. Most notably, it accesses your location and device information. While there is no evidence of malicious activity or of TikTok violating its privacy policy, it is still advisable to exercise caution with the content that’s created and posted.

    The biggest concern surrounding the TikTok application is where user information is stored and who has access to it. According to the TikTok website, “We store all US user data in the United States, with backup redundancy in Singapore. Our data centers are located entirely outside of China, and none of our data is subject to Chinese law.” It also states, “The personal data that we collect from you will be transferred to, and stored at, a destination outside of the European Economic Area (“EEA”).” There is no other specific information about where user data is stored.

    Recently, TikTok published a Transparency Report listing “legal requests for user information”, “government requests for content removal”, and “copyrighted content take-down notices”. The legal-requests section shows that India, the United States, and Japan are the top three countries requesting user information. The United States was the number one country by fulfilled requests (86%) and by number of accounts specified in the requests (255). Oddly enough, China is not listed as having made any requests for user information.

    What Kind of Data is TikTok Tracking?

    Below are some of the permissions TikTok requests on Android and iOS devices once the app is installed. While some of the permissions are to be expected, all of them are consistent with TikTok’s written privacy policy. Even so, the full picture of what TikTok gathers from its users can be alarming (a sketch for inspecting these grants yourself follows the list). In short, the app allows TikTok to:

    • Access the camera (and take pictures/video), the microphone (and record sound), the device’s Wi-Fi connection, and the full list of contacts on your device.
    • Determine if the internet is available and access it if it is.
    • Keep the device turned on and automatically start itself.
    • Obtain detailed information on the user’s location using GPS.
    • Read and write to the device’s storage, install/remove shortcuts, and access the flashlight (turn it on and off).
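    One way to review these grants yourself is to dump the app’s package information from a connected Android device. The sketch below wraps adb with Python’s subprocess module; it assumes adb is installed with a device attached, and the package name is TikTok’s commonly reported Android identifier, so treat it as an assumption:

```python
import subprocess

def requested_permissions(package):
    """Return the permission lines from Android's package dump for one app."""
    dump = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in dump.splitlines()
            if "android.permission." in line]

# Package name is an assumption based on public app listings.
for perm in requested_permissions("com.zhiliaoapp.musically"):
    print(perm)
```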

    You read that right: TikTok has full access to your audio, video, and the list of contacts on your phone. The geolocation tracking via GPS is somewhat surprising, though, especially since TikTok videos don’t display location information. So why collect it? Also, if you use an Android device, TikTok can detect other apps running at the same time, which could give it access to data in another app, such as a banking or password storage app.

    Why is TikTok Banned by the US Military?

    In December 2019, the US military began instructing soldiers to stop using TikTok on all government-owned phones. This policy reversal came shortly after the release of a Dec. 16 Defense Department Cyber Awareness Message classifying TikTok as having potential security risks associated with its use. Since the US military cannot prevent personnel from accessing TikTok on their personal phones, leaders recommended that service members use caution if they receive unfamiliar text messages.

    In fact, this was not the first time the Defense Department has had to encourage service members to remove a popular app from their phones. In 2016, it banned the augmented-reality game Pokémon Go from US military-owned smartphones. That case was a bit different, however, as military officials pointed to concerns over productivity and potential distractions. The concerns over TikTok are focused on cybersecurity and spying by the Chinese government.

    In the past, the DoD has put out more general social media guidelines, advising personnel to proceed with caution when using any social platform. And all DoD personnel are required to take annual cyber awareness training that covers the threats that social media can pose.

    3-2-1 Backup Rule

    What is the 3-2-1 Backup Rule?


    The 3-2-1 backup rule is a concept made famous by photographer Peter Krogh, who said there are two kinds of people: those who have already had a storage failure and those who will have one in the future. It’s inevitable. The 3-2-1 backup rule helps answer two important questions: how many backup files should I have, and where should I store them?

    The 3-2-1 backup rule goes as follows:

    • Have at least three copies of your data.
    • Store the copies on two different media.
    • Keep one backup copy offsite.


    1. Create at least THREE different copies of your data

    Yes, I said three copies. That means that in addition to your primary data, you should have at least two more backups you can rely on if needed. But why isn’t one backup sufficient, you ask? Suppose you keep your original data on storage device A and its backup on storage device B. The two devices have the same characteristics and no common causes of failure. If device A has a failure probability of 1/100 (and the same is true for device B), then the probability of both devices failing at the same time is 1/100 × 1/100 = 1/10,000.

    So with THREE copies of data, if you have your primary data (device A) and two backups (devices B and C), and all devices have the same characteristics and no common failure causes, the probability of all three failing at the same time is 1/1,000,000. That’s much better than having only one copy and a 1/100 chance of losing it all, wouldn’t you say? Keeping more than two copies of your data also protects against a natural disaster destroying a primary copy and a backup that are stored in the same physical location.
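    The arithmetic behind those odds is simply multiplying independent failure probabilities, as this quick check shows (using the same assumed 1/100 per-device failure rate):

```python
p_fail = 1 / 100  # assumed per-device failure probability

one_copy     = p_fail       # 1/100
two_copies   = p_fail ** 2  # 1/10,000  (both devices fail together)
three_copies = p_fail ** 3  # 1/1,000,000 (all three fail together)

print(f"Lose everything with 1 copy:   {one_copy}")
print(f"Lose everything with 2 copies: {two_copies}")
print(f"Lose everything with 3 copies: {three_copies}")
```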

    2. Store your data on at least TWO different types of media

    In the scenario above we assumed there were no common failure causes across the devices holding your precious data. Clearly, this requirement is much harder to fulfill if your primary data and its backup sit in the same place. Disks in the same RAID array aren’t truly independent, and it is not uncommon for one or more disks from the same storage enclosure to fail around the same time.

    This is where the “2” in the 3-2-1 rule comes in. It is recommended that you keep copies of your data on at least TWO different storage types: for example, internal hard disk drives AND removable storage media such as tapes, external hard drives, USB drives, or SD cards. You can also keep the data on two internal hard disk drives in different storage locations.


    Learn more about purchasing tape media to expand your data storage strategy 

    3. Store at least ONE of these copies offsite

    Believe it or not, physical separation between copies is crucial. It’s a bad idea to keep your external storage device in the same room as your primary storage; just ask the numerous companies located in the path of a tornado or in a flood zone, or imagine what you would do if your business caught fire. If you work for a smaller company with only one location, storing your backups in the cloud is a smart alternative. Tapes stored at an offsite location are also popular among companies of all sizes.
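    As a simple illustration of getting one copy offsite, the sketch below copies a local backup to an external drive and uploads it to cloud object storage using the boto3 library. The file name, mount point, and bucket name are placeholders, and it assumes AWS credentials are already configured:

```python
import shutil
import boto3

backup_file = "backup-2020-10-01.tar.gz"  # placeholder local backup archive

# Second media type: copy to an external drive (placeholder mount point).
shutil.copy2(backup_file, f"/mnt/external/{backup_file}")

# Offsite copy: upload to cloud object storage (placeholder bucket name).
s3 = boto3.client("s3")
s3.upload_file(backup_file, "example-offsite-backups", backup_file)
print("Offsite copy uploaded.")
```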


    Every system administrator should have a backup plan. This principle works for any environment, virtual or physical; regardless of the system you are running, backup is king!
