Author: DTC Marketing

    NCSAM Week 4: The Future of Internet Connected Devices

    A decade ago, the average household could not answer the front door from miles away via a smartphone, or order dinner by simply speaking to a small box. These things may have been customary in Hollywood spy films, but now they can be found in nearly every home across America. These internet-connected devices are collectively known as the Internet of Things (IoT).

     

    The internet world is flourishing. It’s not just about computers, laptops, tablets, and smartphones anymore. Thousands of devices are now internet-connected, and the list has grown to include washing machines, robotic vacuum cleaners, door locks, toys, and toasters. Because all of these devices are connected to one another through the internet, we must be more aware of them and their settings to protect our data and our privacy.

    New internet-connected devices provide an unprecedented level of convenience in our lives, but they also require that we share more information than ever. The cars we drive, the appliances we cook with, the watches we wear, the lighting in our homes, and even our home security systems all contain sensors that can talk to other machines and trigger further actions. We have devices that control the amount of energy we use in our homes, and others that track the energy in our bodies by logging eating, sleeping, and exercise habits.

    The security of the information users share with these devices is not always guaranteed. Once a device connects to the internet, it is exposed to all sorts of risks. With more devices entering our homes and workplaces each day, it is more important than ever that we secure them.

    Upgrading your organization’s network devices is easier than ever with DTC

    Future Predictions about Internet Connected Devices

     

    There will be more than 21 billion IoT devices by 2025.

    In 2016, there were more than 4.7 billion devices connected to the internet; by 2021, that number is expected to grow to nearly 11.6 billion.

    There will be more “smart” cities.

    Household consumers aren’t the only ones that use the power of internet connected devices. Cities and companies are also adopting smart technologies to save both time and money. Cities are able to automate, remotely manage, and collect data through things like visitor kiosks, video camera surveillance systems, bike rental stations, and taxis.

    See how some cities are using AI to help crisis management

    Artificial intelligence (AI) will keep growing

    Smart home hubs, thermostats, lighting systems, and even TVs collect data on your habits and patterns of usage. When users set up voice-controlled devices, they allow those devices to record what is said and store the recordings in the cloud. That data feeds what is known as machine learning, a type of artificial intelligence that helps computers “learn” without someone having to program them. 

    Network routers become more secure and smarter

    Most internet-connected devices exist in the home and don’t have security software installed, leaving them vulnerable to attacks. As manufacturers rush to get their products to market, security becomes an afterthought. 

    The router is the entry point to the internet and the gatekeeper of your home network, giving it the ability to protect all of the connected devices behind it. A conventional router provides some security, like password protection, firewalls, and the ability to allow only certain devices on your network. In the future, router manufacturers will continue to find new ways to increase security.

    5G Networks Will Drive IoT Growth

    Wireless carriers will continue to roll out 5G (fifth generation) networks, promising increased speed and the ability to connect more smart devices at the same time. Faster network speeds mean more data collected by your smart devices can be analyzed and managed, driving innovation and growth. 

    Cars Will Continue to Get Smarter

    The emergence of 5G will impact the auto industry like never before. The development of driverless cars and internet-connected vehicles will advance as data moves faster. New cars will increasingly analyze your data and connect with other IoT devices, including other high-tech vehicles on the road.

    5G Connected Devices Will Open the Door to New Security Concerns

    Eventually, 5G internet-connected devices will connect directly to the 5G network rather than via a Wi-Fi router, making those devices more vulnerable to direct attack. Devices that bypass a central router will also be more difficult for in-home users to secure.

     

    For more information on cybersecurity and how to be #CyberSmart, visit the CISA website today:

    Click Here: https://www.cisa.gov/national-cyber-security-awareness-month

    Securing Internet-Connected Devices in Healthcare

    Now more than ever, the healthcare industry is depending on internet-connected devices to improve patient care, organizational productivity, response time, and patient confidentiality. With the recent COVID-19 outbreak, the development of telemedicine and patient portal apps has come to the forefront in the industry. Along with digital health records and internet-connected medical devices, the healthcare industry has also never been more vulnerable to a cyber-attack.

    As the global pandemic spread across the nation, doctors, dentists, and other medical professionals such as therapists were forced to rely on online visits with their patients. The increase in virtual appointments also brings new concerns about patient confidentiality. Patients want to know how safe the information shared during these online visits really is. Are cybercriminals able to steal their personal information? Unfortunately, the answer is yes. The healthcare industry is just as vulnerable as any other industry. However, there are steps healthcare providers can take to protect patient privacy during virtual visits.

    Read more about how we help the healthcare industry with their IT needs.

    What are the privacy risks associated with internet connected healthcare?

    With virtual visits becoming more commonplace, cybercriminals are licking their chops. Hackers look to take advantage of these opportunities by stealing patients’ private medical and billing information. Cybercriminals could try intercepting emails or video chats containing information about preexisting conditions or personal problems you may be having. Once the information is obtained, they could sell it on the dark web, use it for blackmail, or sell it to drug manufacturers who flood customers with advertisements.

    Healthcare records are particularly valuable on black markets because the information they contain can be used to steal your identity. They might hold your birth date, Social Security number, medical conditions, height, weight, home address, and even a picture of you. Hackers can use this information to take out credit cards or loans in your name. 

    Providers may give their patients the option of ending their virtual visit by receiving health records through email or the medical provider’s online portal. Hackers may be able to steal the contents of your email messages or track the keystrokes you use to log onto your medical provider’s online portal. Just as medical providers are required to protect user information, so are all business entities. 

    Learn more about how we can help your business stay compliant.

    5 Ways to Secure Your Healthcare Connected Devices

    1. Control everything that connects into your network.  Managing network segmentation can help with risk mitigation and controlling a breach if one does occur. Network visibility is critical. And, in so many cases, the network acts as your key security mechanism to stop the spread of an attack. Network intelligence, scanners, and security solutions can all help reduce the risk of an attack or breach. 
    2. Create security based on context and layers. Your security platform must work for you and question incoming devices to really understand where they’re coming from. When it comes to IoT and connected devices, contextual security can help isolate IoT solutions on their own network. Set up policies to monitor anomalous behavior and even traffic patterns, and set up additional filters for extra security, like shutting the network segment down if there’s a sudden rise in traffic. 
    3. Centralize and segment connected devices. If you’re going to work with IoT and connected devices, create a separate network, monitor those devices properly, and use IoT aggregation hubs to further centralize control of the devices. 
    4. Align users and the business when it comes to more connected devices in healthcare. Ensure there is complete alignment between business and IT leadership units. This is the best way to gain the most value out of these devices and ensure you don’t fall into an IoT device hole.
    5. Always test your systems and maintain visibility.  Never lose sight of your devices and build a good monitoring platform. The more things that connect into the network the harder it will be to monitor them all.
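As a toy illustration of points 1–3, the sketch below checks whether a device sits on an isolated IoT segment and flags a sudden rise in traffic. The subnet, sample values, and threshold are hypothetical, chosen only for illustration; a production deployment would rely on real network intelligence and monitoring tools rather than a few lines of Python.

```python
import ipaddress

# Hypothetical isolated IoT segment (assumption for this sketch).
IOT_SUBNET = ipaddress.ip_network("192.168.50.0/24")

def on_iot_segment(ip):
    """Return True if a device's address falls inside the isolated IoT segment."""
    return ipaddress.ip_address(ip) in IOT_SUBNET

def traffic_spike(samples, threshold=3.0):
    """Flag a sudden rise in traffic: the latest sample exceeds `threshold`
    times the average of the earlier samples (a stand-in for a real
    anomaly detector)."""
    history, latest = samples[:-1], samples[-1]
    baseline = sum(history) / len(history)
    return latest > threshold * baseline

# A smart thermostat on the IoT segment, a laptop that is not:
print(on_iot_segment("192.168.50.12"))   # True
print(on_iot_segment("192.168.1.5"))     # False

# Steady traffic vs. a sudden surge (e.g. kilobytes per minute):
print(traffic_spike([100, 110, 95, 105, 102]))   # False
print(traffic_spike([100, 110, 95, 105, 900]))   # True
```

In practice, the "shut the segment down" response from point 2 would be triggered when a check like `traffic_spike` fires.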

    A plan for guarding against ransomware in the healthcare industry

    So, what can hospitals, medical centers, dentists, and other healthcare providers do to guard against the threat of cyber-attack?  Here is a simple five-point plan that will go a long way to helping healthcare professionals secure their defenses.

    Stay up to date

    Make sure that servers and PCs are up to date with the latest operating systems and antivirus solutions.

     

    Retire unused IT assets

    Consider whether older machines, which are beyond updates or support, could be replaced or retired. The cost and inconvenience of replacing older equipment will probably be less than the impact of a data breach.

     

    Sell Your Retired IT Assets for Cash

     

    Educate employees

    Make sure everyone in the organization is familiar with ransomware methods and can recognize attempts to gain password credentials or circulate harmful links and attachments. Hospitals employ so many different and diverse professionals, covering a multitude of functions, that there needs to be a culture of vigilance across the entire organization.

     

    Be prepared for an attack

    Use different credentials for accessing backup storage and maybe even a mixture of file systems to isolate different parts of your infrastructure to slow the spread of ransomware. Healthcare organizations that follow the “1-10-60” rule of cybersecurity will be better placed to neutralize the threat of a hostile adversary before it can leave its initial entry point. The most cyber-prepared healthcare agencies should aim to detect an intrusion in under a minute, perform a full investigation in under 10 minutes, and eradicate the adversary from the environment in under an hour.

     

    Create an Airgap

    Keep three copies of your data, on at least two different media, with one stored offsite (e.g. cloud or tape) and one stored offline (e.g. tape). Having your data behind a physical air gap creates perhaps the most formidable barrier against ransomware. Tape can greatly speed up your recovery in the hours and days that follow an attack, especially if your primary backups have been disrupted. Tape is also supremely efficient for storing huge amounts of infrequently accessed medical records for a very long time. Tapes can also be encrypted so that even if they fell into the wrong hands, thieves could not access or use the data.
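The rule above can be expressed as a quick sanity check. This sketch uses illustrative field names (there is no standard backup-inventory API implied) to validate a list of backup copies: three copies, at least two media, one offsite, one offline.

```python
def meets_321(copies):
    """Check a backup inventory against the rule described above: at least
    three copies, on at least two different media, with one offsite and
    one offline."""
    media = {c["medium"] for c in copies}
    return (len(copies) >= 3
            and len(media) >= 2
            and any(c["offsite"] for c in copies)
            and any(c["offline"] for c in copies))

# Example inventory: primary disk, offsite cloud copy, air-gapped tape.
plan = [
    {"medium": "disk",  "offsite": False, "offline": False},
    {"medium": "cloud", "offsite": True,  "offline": False},
    {"medium": "tape",  "offsite": False, "offline": True},
]
print(meets_321(plan))        # True
print(meets_321(plan[:2]))    # False: only two copies, nothing offline
```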

     

    Learn more about how to create an Airgap

    NCSAM Week 2: Securing Devices at Home and Work


     

    According to a 2018 study reported by CNBC, over 70% of employees around the world worked remotely at least one day per week. With the recent COVID-19 pandemic, many organizations have had to make full-time remote work an option just to stay in business. Yet while full-time remote workers are increasingly common, there still aren’t many resources that focus on the cybersecurity risks created by working remotely.

    With the latest surge in work-from-home (WFH) employees, businesses are forced to rely on business continuity planning. This means that organizations must find ways to protect their customers’ sensitive data while simultaneously granting workplace flexibility. Given the current conditions we are all facing, and in celebration of National Cyber Security Awareness Month (NCSAM), we thought we should share a few tips to help your business increase its cybersecurity.

    Security tips for the home, office and working from a home office

    Secure your working area

    The first and easiest piece of security advice is to physically secure your workspace. Working remotely should be treated the same as working in the office, so you need to lock up when you leave. There have been far too many instances of laptops with sensitive data being stolen from living rooms, home offices, and even public settings such as coffee shops. Never leave your devices unattended, and lock doors when you leave.

    See why laptop and home office security is so important. 

    Secure your router

    Cybercriminals take advantage of default passwords on home routers because they are rarely changed, leaving home networks vulnerable. Change the router’s password from the default to something unique. Also make sure firmware updates are installed so known vulnerabilities aren’t exploitable. 

    Use separate devices for work and personal

    It’s important to keep a clear separation between your work devices and home devices. At first it may seem like an unnecessary burden to constantly switch between devices throughout the day, but you never know if one has been compromised. Doing the same for your mobile devices can decrease the amount of sensitive data exposed if your personal device or work device is attacked.

    Encrypt the device you are using

    Encryption is the process of encoding information so only authorized parties can access it. If your organization hasn’t already encrypted its devices, it should. Encrypting the devices prevents strangers from accessing the contents of your device without the password, PIN, or biometrics. 

    Here is how to enable encryption on common operating systems:

    • Windows: Turn on BitLocker.
    • macOS: Turn on FileVault.
    • Linux: Use dm-crypt or similar.
    • Android: Enabled by default since Android 6.
    • iOS: Enabled by default since iOS 8.
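For reference, the list above can be collapsed into a small lookup table. This is only a convenience sketch of the options named in the list, not an exhaustive survey of encryption tooling for each platform.

```python
# The built-in encryption options mentioned in the list above,
# keyed by operating system.
ENCRYPTION_TOOLS = {
    "Windows": "BitLocker",
    "macOS": "FileVault",
    "Linux": "dm-crypt",
    "Android": "enabled by default since Android 6",
    "iOS": "enabled by default since iOS 8",
}

def encryption_tool(os_name):
    """Look up the built-in encryption option for a given OS."""
    return ENCRYPTION_TOOLS.get(os_name, "unknown - check your vendor's documentation")

print(encryption_tool("Windows"))  # BitLocker
print(encryption_tool("macOS"))    # FileVault
```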

    Check that your operating system is supported and up to date.

    Usually, operating system developers only support the last few major versions, as supporting all versions is costly and the majority of users upgrade when prompted to do so. Unsupported operating systems no longer receive security patches, leaving your device and sensitive data at risk. If your device does not support the latest operating system, it may be time to look into replacing the device.

    Here’s how to check if your operating system is still supported:

    • Windows: Check the Windows lifecycle fact sheet
    • macOS: Apple has no official policy for macOS. That said, Apple consistently supports the last three versions of macOS. So assuming Apple releases a new version of macOS each year, each release of macOS should be supported for roughly three years.
    • Linux: Most active distributions are well supported.
    • Android: Security updates target the current and last two major versions, but you may need to check that your manufacturer/carrier is sending the security patches to your device. 
    • iOS: Like macOS, Apple has no official policy for iOS but security updates generally target the most recent major version and the three prior. 

    Read more about Android security here

    Create a strong PIN/password only YOU know

    Everything mentioned prior to this won’t matter if you don’t use a strong password. A common tip for creating a strong password is to avoid using repeating numbers (000000), sequences (123456), or common passwords such as the word password itself.

    More tips on creating a strong password include:

    • Avoid using anything that is related to you
    • Avoid using your date of birth
    • Avoid using your license plate
    • Avoid using your home address
    • Avoid using any family members or pets’ names.

     

     A good PIN/password should appear arbitrary to everyone except you. Consider investing in a password manager. A good password manager can help you create strong passwords and remember them, as well as securely share them with family members, employees, or friends. 
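The tips above can be automated. The sketch below uses Python's standard `secrets` module to generate a random password, rejecting any candidate that happens to contain a repeated run (like "000") or a sequence (like "123"); the 16-character length is an arbitrary example, not a recommendation from any standard.

```python
import secrets
import string

def has_weak_run(pw, run=3):
    """True if pw contains `run` identical or ascending-sequential
    characters in a row (e.g. '000' or '123')."""
    for i in range(len(pw) - run + 1):
        chunk = pw[i:i + run]
        if len(set(chunk)) == 1:                        # repeated, e.g. '000'
            return True
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1   # sequence, e.g. '123'
               for j in range(run - 1)):
            return True
    return False

def generate_password(length=16):
    """Generate a random password from letters, digits, and punctuation,
    retrying until it contains no weak runs."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if not has_weak_run(candidate):
            return candidate

pw = generate_password()
print(len(pw))                  # 16
print(has_weak_run("000000"))   # True
print(has_weak_run("123456"))   # True
```

A password manager does essentially this, plus storage, on your behalf.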

    Learn more about how to create a strong password

     Install antivirus software

    Antivirus software is a program that detects harmful computer viruses and works on removing them from the computer system. Antivirus software operates as a preventive system: it not only removes a virus but also counteracts any potential virus from infecting the device in the future.

    Enable two-factor authentication

    Two-factor authentication is an authentication method in which access is granted only after successfully presenting two pieces of evidence to an authentication mechanism. This method has been proven to reduce the risk of successful phishing emails and malware infections. Even if a cybercriminal obtains your password, they are unable to log in because they do not have the second piece of evidence.

    The first and most common piece of evidence is a password. The second takes many forms but is typically a one-time code or push notification. Several applications can be used for two-factor authentication, such as Google Authenticator. 
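Those one-time codes are typically time-based one-time passwords (TOTP, RFC 6238), which authenticator apps implement. The sketch below builds one from Python's standard library; the secret shown is the published RFC 6238 test key, used here purely for demonstration.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1,
    the same scheme authenticator apps implement."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                     # 30-second time window
    msg = struct.pack(">Q", counter)               # counter as big-endian 64-bit
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238's published test secret; at Unix time 59 the expected
# 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

The server and your phone share the secret; both compute the same code for the current 30-second window, so the code proves possession of the device without the secret ever crossing the wire.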

    Erase data from any devices you plan to sell

    This should be the number one rule on any cybersecurity list. It is only a matter of time until your devices are obsolete and it is time to upgrade. The one thing you don’t want is a data leak because you failed to properly erase the data from your device before selling or disposing of it. Returning the device to factory settings may not always be enough, as some hackers know how to retrieve data that has been “erased”. Before doing anything, always remember to back up your data to multiple devices before clicking that “delete” button. 
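To illustrate why "delete" alone is not erasure, here is a simplified sketch that overwrites a file with random bytes before removing it. This is not a substitute for professional data destruction: on SSDs and journaling filesystems, overwrites may not reach the original blocks, which is why full-disk encryption plus a reset (or physical destruction) is the more reliable route.

```python
import os
import secrets

def overwrite_and_delete(path, passes=3):
    """Overwrite a file with random bytes before unlinking it, so the old
    contents are not simply left behind on disk. A simplified sketch only;
    see the caveats above about SSDs and journaling filesystems."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())                # push the overwrite to disk
    os.remove(path)

# Usage: create a throwaway file, then wipe it.
with open("old_records.txt", "w") as f:
    f.write("sensitive data")
overwrite_and_delete("old_records.txt")
print(os.path.exists("old_records.txt"))  # False
```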

    Consult your operating system’s documentation to see how to properly reset your device to factory settings. If you are certain you do not want the data on your device to be accessed ever again, we can help with that. Here is a list of data destruction services we provide:

    Security tips for employers handling a remote workforce

    Train employees on cybersecurity awareness

    As cybercriminals are always looking for new ways to bypass security controls and gain access to sensitive information, cybersecurity isn’t something that can be taught just once. It must be a continual process of learning and retention. Here are a few things a business can teach its staff to help thwart a cyberattack:

    • Avoid malicious email attachments and other email-based scams
    • Identify domain hijacking
    • Use operations security on their social media accounts and public profiles 
    • Only install software if they need to 
    • Avoid installing browser plugins that come from unknown or unidentified developers

    Use a virtual private network (VPN)

    A virtual private network (VPN) extends a private network across a public network, enabling you to send and receive data across shared or public networks as if you are directly connected to the private network. They do this by establishing a secure and encrypted connection to the network over the internet and routing your traffic through that. This keeps you secure on public hotspots and allows for remote access to secure computing assets. 

    Microsoft’s Project Natick: The Underwater Data Center of the Future

    When you think of underwater, deep-sea adventures, what comes to mind? Colorful plants, odd-looking sea creatures, and maybe even a shipwreck or two; but what about a data center? Moving forward, underwater data centers may become the norm rather than an anomaly. Back in 2018, Microsoft sank an entire data center to the bottom of the Scottish sea, submerging 864 servers and 27.6 petabytes of storage. After two years sitting 117 feet deep in the ocean, Microsoft’s Project Natick, as it’s known, has been brought to the surface and deemed a success.

    What is Project Natick?

     

    Microsoft’s Project Natick was conceived back in 2015, when it was proposed that submerged servers could significantly lower energy usage. To test the original hypothesis, Microsoft immersed a data center off the coast of California for several months as a proof of concept, to see if the computers would even endure the underwater journey. Ultimately, the experiment was intended to show that portable, flexible data center placements in coastal areas around the world could scale up data center capacity while keeping energy and operating costs low. Doing this would allow companies to use smaller data centers closer to where customers need them, instead of routing everything to centralized hubs. Next, the company will look into increasing the size and performance of these data centers by connecting more than one together to merge their resources.

    What We Learned from Microsoft’s Undersea Experiment

    After two years of being submerged, the results of the experiment showed not only that offshore underwater data centers perform well overall, but also that the servers inside proved to be up to eight times more reliable than their above-ground equivalents. The team of researchers plans to further examine this phenomenon and exactly what was responsible for the greater reliability. For now, steady temperatures, no oxygen corrosion, and a lack of humans bumping into the computers are thought to be the reasons. Hopefully, the same outcome can be transposed to land-based server farms for increased performance and efficiency across the board.

    The project also showed that such data centers can operate with more power efficiency, especially in regions where the grid on land is not considered reliable enough for sustained operation. Microsoft will also take lessons on renewability from the project’s successful deployment, with Natick relying on wind, solar, and experimental tidal technologies. As for future underwater servers, Microsoft acknowledged that the project is still in its infancy. However, if it were to build a data center with the same capabilities as a standard Microsoft Azure data center, it would require multiple vessels.

    Do your data centers need servicing?

    The Benefits of Submersible Data Centers

     

    The benefit of using a natural cooling agent instead of energy to cool a data center is an obvious positive outcome of the experiment. When Microsoft hauled its underwater data center up from the bottom of the North Sea and conducted some analysis, researchers also found the servers were eight times more reliable than those on land.

    The shipping-container-sized pod recently pulled from 117 feet below the North Sea off Scotland’s Orkney Islands was deployed in June 2018. Over the last two years, researchers observed the performance of 864 standard Microsoft data center servers installed on 12 racks inside the pod. During the experiment they also learned more about the economics of modular undersea data centers, which can be quickly set up offshore near population centers and require fewer resources for efficient operations and cooling. 

    Natick researchers hypothesize that the servers benefited from the pod’s nitrogen atmosphere, which is less corrosive than oxygen. The absence of human interaction to disrupt components also likely contributed to the increased reliability.

    The North Sea-based project also exhibited the possibility of leveraging green technologies for data center operations. The data center was connected to the local electric grid, which is 100% supplied by wind, solar and experimental energy technologies. In the future, Microsoft plans to explore eliminating the grid connection altogether by co-locating a data center with an ocean-based green power system, such as offshore wind or tidal turbines.

    Celebrating National Cyber Security Awareness Month


     

    Every October since 2004, National Cyber Security Awareness Month (NCSAM) has been observed in the United States. Started by the National Cyber Security Division within the Department of Homeland Security and the nonprofit National Cyber Security Alliance, NCSAM aims to spread awareness about the importance of cybersecurity. The National Cyber Security Alliance launched NCSAM as a broad effort to improve online safety and security. Since 2009, the month has had an overall theme; for 2020 we celebrate “Do Your Part. #BeCyberSmart”. Weekly themes were introduced in 2011. This year, the weekly themes are as follows:

    • Week of October 5 (Week 1): If You Connect It, Protect It
    • Week of October 12 (Week 2): Securing Devices at Home and Work
    • Week of October 19 (Week 3): Securing Internet-Connected Devices in Healthcare
    • Week of October 26 (Week 4): The Future of Connected Devices

    If You Connect It, Protect It

     

    October 1, 2020, marked the 17th annual National Cybersecurity Awareness Month (NCSAM), reminding everyone of the role we all play in online safety and security at home and in the workplace. Brought forth by both the Cybersecurity and Infrastructure Security Agency (CISA) and the National Cyber Security Alliance (NCSA), NCSAM is a joint effort between government and industry to make sure every American has the resources they need to stay safe and secure online. 

    To kick off National Cyber Security Awareness Month, here are some tips to stay safe online:

    Enable multi-factor authentication (MFA). This ensures that the only person who has access to your account is you. Use MFA for email, banking, social media and any other service that requires logging in.

    Use the longest password allowed. Get creative and customize your standard password for different sites, which can prevent cybercriminals from gaining access to these accounts and protect you in the event of a breach. Use a password manager to generate and remember different, complex passphrases for each of your accounts.

    Protect what you connect. Whether it’s your computer, smartphone, game device or other network devices, the best defense against viruses and malware is to update to the latest security software, web browser and operating systems. 

    Limit what information you post on social media.  Cybercriminals look for everything, from personal addresses to your pets’ names. What many people don’t realize is that these seemingly random details are all cybercriminals need to target you, your loved ones, and your physical belongings. Keep Social Security numbers, account numbers and passphrases private, as well as specific information about yourself, such as your full name, address, birthday and even vacation plans. Disable location services that allow anyone to see where you are.

    Stay protected on public networks. Before you connect to any public Wi-Fi be sure to confirm the name of the network and exact login procedures with appropriate staff to ensure that the network is legitimate. Your personal hotspot is a safer alternative to free Wi-Fi. Also, only use sites that begin with “https://” when shopping or banking online.

    Introducing CISA, the Federal Government’s Protection Against Cyber-Attacks

     

    On November 16, 2018, the United States Congress formed the Cybersecurity and Infrastructure Security Agency (CISA) to detect threats, quickly communicate information, and aid in the defense of the nation’s critical infrastructure. The new federal agency was created through the Cybersecurity and Infrastructure Security Agency Act of 2018, which was signed into law by President Donald Trump. That legislation turned the National Protection and Programs Directorate (NPPD) of the Department of Homeland Security (DHS) into the new Cybersecurity and Infrastructure Security Agency, reassigning all of its resources and responsibilities. Before the bill was passed, the NPPD handled all of DHS’s cybersecurity-related affairs.

     

    Why the CISA was Formed

    In April 2015, IT workers at the United States Office of Personnel Management (OPM), the agency that manages the government’s civilian workforce, discovered that some of its personnel files had been hacked. Sensitive personal data on 22 million current and former federal employees was stolen by suspected Chinese hackers. Among the sensitive data that was stolen, were millions of SF-86 forms, which contain extremely personal information collected in background checks for people requesting government security clearances, along with records of millions of people’s fingerprints. 

    In the wake of the massive data breach, it became even more evident that the Department of Homeland Security was not effectively positioned to respond to the growing threat of cyber-attacks, both foreign and domestic. As foreign intrusions into U.S. IT infrastructure and other forms of cyber-attack increased, industry experts demanded the creation of a new agency better aligned to handle cybersecurity.

    DHS’s cybersecurity strategy, made public in May 2018, offered a strategic framework to carry out the government’s cybersecurity responsibilities during the following five years. The strategy highlighted a unified approach to managing risk and lending greater authority to the creation of a separate cybersecurity agency. Besides the need for a new approach to the nation’s cybersecurity threats, CISA was created to solve what security professionals and government officials frequently referred to as a “branding” problem DHS faced with NPPD. CISA would be a clear and focused federal agency.

    Learn more about the 2015 OPM Attack

    What Does CISA Do?

     

    In a nutshell, CISA is in charge of protecting the nation’s critical infrastructure from physical and cyber-attacks. The agency’s mission is to build the national capacity to defend against cyber-attacks and to work with the federal government to provide cybersecurity tools, incident response services and assessment capabilities to safeguard the .gov networks that support the essential operations of partner departments and agencies. Below is a list of other responsibilities the CISA has undertaken as a newly formed federal agency:

    • Coordinate security and resilience efforts using trusted partnerships across the private and public sector
    • Deliver technical assistance and assessments to federal stakeholders as well as to infrastructure owners and operators nationwide
    • Enhance public safety interoperable communications at all levels of government 
    • Help partners across the country develop their emergency communications capabilities
    • Conduct extensive, nationwide outreach to support and promote the ability of emergency response providers and relevant government officials to continue to communicate in the event of a natural disaster, act of terrorism, or other man-made disaster

    Visit the CISA official government page

    Who Leads the CISA?

     

    The CISA is made up of two core operations that are vital to the agency’s success. First, is the National Cybersecurity and Communications Integration Center (NCCIC), which delivers 24×7 cyber-situational awareness, analysis, incident response and cyber-defense capabilities to the federal government. The NCCIC operates on state, local, tribal, and territorial government levels; within the private sector; and with international partners. The second is the National Risk Management Center (NRMC), which is a planning, analysis and collaboration center working to identify and address the most significant risks to the nation’s critical infrastructure.

    The CISA is led by a team of eight highly respected and experienced individuals.

    • Director, Cybersecurity and Infrastructure Security Agency (CISA), Christopher C. Krebs
    • Deputy Director, Matthew Travis 
    • Assistant Director for Cybersecurity, Bryan Ware 
    • Assistant Director (Acting) for Infrastructure Security, Steve Harris
    • Assistant Director, National Risk Management Center, Bob Kolasky 
    • Assistant Director (Acting) for Emergency Communications, Vincent DeLaurentis 
    • Assistant Director for Integrated Operations, John Felker
    • Assistant Director (Acting) for Stakeholder Engagement, Bradford Willke

    You can learn more about the CISA leadership team and their structure here.

    Cyber Insurance in the Modern World

    Yes, you read that correctly, cyber insurance is a real thing and it does exactly what it says. No, cyber insurance can’t defend your business from a cyber-attack, but it can keep your business afloat with secure financial support should a data security incident happen. Most organizations operate their business and reach out to potential customers via social media and internet-based transactions. Unfortunately, those modes of communication also serve as opportunities for cyber warfare. The odds are not in your favor, as cyberattacks are likely to occur and have the potential to cause serious losses for organizations both large and small. As part of a risk management plan, organizations regularly must decide which risks to avoid, accept, control or transfer. Transferring risk is where cyber insurance pays massive dividends.

     

    What is Cyber Insurance?

    By definition, a cyber insurance policy, also known as cyber risk insurance (CRI) or cyber liability insurance coverage (CLIC), is meant to help an organization alleviate the risk of a cyber-related security breach by offsetting the costs involved with the recovery. Cyber insurance started making waves in 2005, with the total value of premiums projected to reach $7.5 billion by 2020. According to audit and assurance consultants PwC, about 33% of U.S. companies currently hold a cyber insurance policy. Clearly companies are feeling the need for cyber insurance, but what exactly does it cover? Depending on the policy, cyber insurance covers expenses related to the policyholder as well as any claims made by affected third parties.

    Below are some common reimbursable expenses:

    • Forensic Investigation: A forensic investigation is needed to establish what occurred, how best to repair the damage caused, and how to prevent a similar security breach from happening again. This may include coordination with law enforcement and the FBI.
    • Any Business Losses Incurred: A typical policy may contain similar items that are covered by an errors & omissions policy, as well as financial losses caused by network downtime, business disruption, data loss recovery, and reputation repair.
    • Privacy and Notification Services: This involves mandatory data breach notifications to customers and involved parties, and credit monitoring for customers whose information was or may have been violated.
    • Lawsuits and Extortion Coverage: This includes legal expenses related to the release of confidential information and intellectual property, legal settlements, and regulatory fines. This may also include the costs associated from a ransomware extortion.

    Like anything in the IT world, cyber insurance is continuously changing and growing. Cyber risks change often, and organizations have a tendency to avoid reporting the true effect of security breaches in order to prevent negative publicity. Because of this, policy underwriters have limited data on which to define the financial impact of attacks.

    How do cyber insurance underwriters determine your coverage?

     

    As any insurance company does, cyber insurance underwriters want to see that an organization has taken it upon itself to assess its weaknesses to cyberattacks. This cyber risk profile should also show that the company follows best practices by deploying defenses and controls to protect against potential attacks. Employee education in the form of security awareness training, especially for phishing and social engineering, should also be part of the organization’s security protection plan.

    Cyber-attacks against all enterprises have been increasing over the years. Small businesses tend to take on the mindset that they’re too small to be worth the effort of an attack. Quite the contrary though, as Symantec found that over 30% of phishing attacks in 2015 were launched against businesses with under 250 employees. Symantec’s 2016 Internet Security Threat Report indicated that 43% of all attacks in 2015 were targeted at small businesses.

    You can download Symantec’s 2016 Internet Security Threat Report here

    The Centre for Strategic and International Studies estimates that the annual costs to the global economy from cybercrime were between $375 billion and $575 billion, with the average data breach costing larger companies over $3 million per incident. Every organization is different and therefore must decide whether it is willing to risk that amount of money, or whether cyber insurance is necessary to cover the costs it could potentially sustain.
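
    The avoid-or-transfer decision above boils down to a simple expected-loss comparison. Below is a hedged sketch of that arithmetic; the breach probability and premium are hypothetical placeholders (only the $3 million average incident cost comes from the figures cited above), and nothing here reflects a real insurer’s quote:

```python
# Illustrative risk-transfer math. All inputs are hypothetical except the
# $3M average incident cost cited in the article.
def expected_annual_loss(breach_probability, cost_per_breach):
    """Expected annual loss if the risk is retained rather than transferred."""
    return breach_probability * cost_per_breach

eal = expected_annual_loss(0.10, 3_000_000)  # assumed 10% annual breach odds
annual_premium = 25_000                      # hypothetical premium quote

print(f"Expected annual loss: ${eal:,.0f}")
print(f"Annual premium:       ${annual_premium:,.0f}")
# Transferring the risk is attractive when the premium is well below the
# expected loss the organization would otherwise retain.
print("Transfer the risk" if annual_premium < eal else "Retain the risk")
```

    With these illustrative numbers the premium is a small fraction of the expected loss, which is exactly the situation where transferring risk pays off.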

    As stated earlier in the article, cyber insurance covers first-party losses and third-party claims, whereas general liability insurance only covers property damage. Sony is a great example of when cyber insurance comes in handy. Sony was caught in the 2011 PlayStation hacker breach, with costs reaching $171M. Those costs could have been offset by cyber insurance had the company ensured it was covered beforehand.

    The cost of cyber insurance coverage and premiums is based on an organization’s industry, the type of services it provides, its probability of data risks and exposures, its policies, and its annual gross revenue. Every business is different, so it is best to consult with your policy provider when seeking more information about cyber insurance.

    Snowflake IPO

    On September 16, 2020, history was made on the New York Stock Exchange. A software company named Snowflake (ticker: SNOW) made its debut in the largest software IPO ever. As one of the most hotly anticipated listings of 2020, Snowflake began publicly trading at $120 per share and almost immediately jumped to $300 per share within a matter of minutes. With that unprecedented hike in price, Snowflake also became the largest company ever to double in value on its first day of trading, ending with a value of almost $75 billion.

    What is Snowflake?

    So, what exactly does Snowflake do? What is it that makes billionaire investors like Warren Buffett and Marc Benioff jump all over a newly traded software company? It must be something special, right? With all the speculation surrounding the IPO, it’s worth explaining what the company does. A simple explanation would be that Snowflake helps companies store their data in the cloud, rather than in on-site facilities. Traditionally, a company’s data has been stored on-premises on physical servers managed by that company. Tech giants like Oracle and IBM have led the industry for decades. Well, Snowflake is profoundly different. Instead of helping companies store their data on-premises, Snowflake facilitates the warehousing of data in the cloud. But that’s not all. Snowflake has the capability of making the data queryable, meaning it simplifies the process for businesses looking to pull insights from the stored data. This is what sets Snowflake apart from the other data-hoarding behemoths of the IT world. Snowflake discovered the secret to separating data storage from the act of computing on the data. The best part is that they’ve done this before any of the other big players like Google, Amazon, or Microsoft. Snowflake is here to stay.

    Snowflake’s Leadership

    Unlike Silicon Valley’s tech unicorns of the past, Snowflake was started in 2012 by three database engineers. Backed by venture capitalists and one VC firm that wishes to remain anonymous, Snowflake is currently led by software veteran Frank Slootman. Before taking the reins at Snowflake, Slootman had great success leading Data Domain and ServiceNow. He grew Data Domain from just a twenty-employee startup venture to over $1 billion in sales and a $2.4 billion acquisition by EMC. I think it’s safe to say that Snowflake is in the right hands, especially if it has any hopes of maturing into its valuation.

    Snowflake’s Product Offering

    We all know that Snowflake isn’t the only managed data warehouse in the industry. Both Amazon Web Services’ (AWS) Redshift and Google Cloud Platform’s (GCP) BigQuery are very common alternatives. So there had to be something that set Snowflake apart from the competition. It’s a combination of flexibility, service, and user interface. With a database like Snowflake, two pieces of infrastructure drive the revenue model: storage and computing. Snowflake takes responsibility for storing the data as well as ensuring the data queries run quickly and smoothly. The idea of splitting storage and computing in a data warehouse was unusual when Snowflake launched in 2012. Currently, there are query engines like Presto that exist solely to run queries, with no storage included. Snowflake offers the advantages of splitting storage and queries: stored data is located remotely on the cloud, saving local resources for the load of computing data. Moving storage to the cloud delivers lower cost, higher availability, and greater scalability.
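
    The storage/compute split described above can be illustrated with a toy billing model. The rates below are invented placeholders, not Snowflake’s actual pricing; the point is only that the two resources are metered independently:

```python
# Toy billing model for a warehouse that meters storage and compute
# independently. Rates are hypothetical placeholders, not real pricing.
STORAGE_RATE_PER_TB_MONTH = 23.0  # assumed flat monthly storage rate
COMPUTE_RATE_PER_CREDIT = 2.0     # assumed per-credit compute rate

def monthly_cost(stored_tb, compute_credits):
    # Idle data accrues only storage cost; heavy querying adds only
    # compute cost. Neither line item affects the other.
    return (stored_tb * STORAGE_RATE_PER_TB_MONTH
            + compute_credits * COMPUTE_RATE_PER_CREDIT)

# A 50 TB warehouse that consumes 400 compute credits in a month:
print(monthly_cost(50, 400))
# Doubling the query workload changes only the compute portion of the bill:
print(monthly_cost(50, 800))
```

    The design choice this models is exactly what the paragraph above describes: because storage and compute scale independently, a customer with lots of rarely-queried data isn’t forced to pay for idle compute, and vice versa.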

     

    Multiple Vendor Options

    A majority of companies have adopted a multi-cloud approach, as they prefer not to be tied down to a single cloud provider. There’s a natural hesitancy to choose options like BigQuery that are tied to a single cloud like Google. Snowflake offers a different type of flexibility, operating on AWS, Azure, or GCP, satisfying the multi-cloud wishes of CIOs. With tech giants battling for domination of the cloud, Snowflake is in a sense the Switzerland of data warehousing.

    Learn more about a multi-cloud approach


    Snowflake as a Service

    When considering building a data warehouse, you need to take into account the management of the infrastructure itself. Even when farming out servers to a cloud provider, decisions like the right size storage, scaling to growth, and networking hardware come into play. Snowflake is a fully managed service. This means that users don’t need to worry about building any infrastructure at all. Just put your data into the system and query it. Simple as that. 

    While fully managed services sound great, they come at a cost. Snowflake users need to be deliberate about storing and querying their data, as fully managed services are pricey. If you’re deciding whether to build or buy your data warehouse, it would be wise to compare Snowflake’s total cost of ownership to that of building something yourself.

     

    Snowflake’s User Interface and SQL Functionality

    Snowflake’s UI for querying and exploring tables is as easy on the eyes as it is to use. Their SQL functionality is also a strong selling point. SQL (Structured Query Language) is the programming language that developers and data scientists use to query their databases. Each database has slightly different details, wording, and structure. Snowflake’s SQL dialect seems to have collected the best from all of the database languages and added other useful functions.

     

    A Battle Among Tech Giants

    As the saying goes, competition creates reason for caution. Snowflake is rubbing shoulders with some of the world’s largest companies, including Amazon, Google, and Microsoft. While Snowflake has benefited from a first-mover advantage, the Big Three are catching up quickly by creating comparable platforms.

    However, Snowflake is dependent on these competitors for data storage. It has managed to thrive by acting as “Switzerland”, so customers don’t have to use just one cloud provider. As more competition enters the multi-cloud service industry, nonalignment can be an advantage, but may not always be possible. Snowflake’s market share is vulnerable, as there are no clear barriers to entry for the industry giants, given their technical talent and size.

    Snowflake is just an infant in the public eye and we will see if it sinks or swims over the next year or so. But with brilliant leadership, a promising market, and an extraordinary track record, Snowflake may be much more than a one hit wonder. Snowflake may be a once in a lifetime business.

    HPE vs Dell: The Battle of the Servers

    When looking at purchasing new servers for your organization, it can be a real dilemma deciding which to choose. With so many different brands offering so many different features, the current server industry may seem a bit saturated to some. Well, this article does the hard work for you. We’ve narrowed down the list of server manufacturers to two key players: Dell and Hewlett Packard Enterprise (HPE). We will help you with your next purchase decision by comparing qualities and features of each, such as customer support, dependability, overall features, and cost. These are some of the major items to consider when investing in a new server. So, let’s begin.

    Customer Support – Dell

    The most beneficial thing regarding Dell customer support is that the company doesn’t require a paid support program to download any updates or firmware. Dell ProSupport is considered in the IT world to be one of the more consistently reliable support programs in the industry. That being said, rumors have been circulating that Dell will require a support contract for downloads in the future.

    You can find out more about Dell ProSupport here.

    Customer Support – HPE

    Unlike Dell, HPE currently requires businesses to have a support contract to download any new firmware or updates. It can be tough to find support drivers and firmware through HPE’s platform even if you do have a contract in place. HPE’s website is a bit challenging to use in regard to finding information on support in general. On a brighter note, the support documentation provided is extremely thorough, and those with know-how can find manuals for essentially anything they need. By creating an online account through HPE‘s website, one can gain access to HPE‘s 24/7 support, manage future orders, and utilize the HPE Operational Support Services experience.

    Customer Support Winner: Dell

    Dependability – Dell

    I’ll be the first to say that I’m not surprised whenever I hear about Dell servers running for years on end without any issues. Dell has always been very consistent as far as constantly improving their servers. Dell is the Toyota of the server world.

    Dependability – HPE

    Despite the reliability claims made for HPE’s Superdome, Apollo, and newer ProLiant lines of servers, HPE is known to have faults within its servers. In fact, in a survey done in mid-2017, HPE ProLiant servers had about 2.5x as much downtime as Dell PowerEdge servers. However, HPE does do a remarkable job with prognostic alerts for parts that are deemed likely to fail, giving businesses an opportunity to repair or replace parts before they experience downtime.

    Dependability Winner: Dell

    Out of Band Management Systems

    In regard to Out of Band Management systems, HPE’s system is known as Integrated Lights-Out (iLO), and Dell’s system is known as Integrated Dell Remote Access Controller (iDRAC). In the past there were some major differences between the two, but currently the IPMI implementations don’t differ enough to be a big determining factor. Both systems now provide similar features, such as HTML5 support. However, here are a few differences they do have.

    Out of Band Management Systems – Dell

    Dell’s iDRAC has progressed quite a bit in recent years. After iDRAC 7, Java is no longer needed, yet the graphical user interface is not quite as polished as HPE’s. iDRAC uses a physical license, which can be purchased on the secondary market, letting you avoid being locked in with the OEM after end of life. Updates generally take a bit longer with iDRAC.

    Out of Band Management Systems – HPE

    HPE’s iLO Advanced console requires a license, but the standard console is included. Using the advanced console can ultimately lock you in with the OEM if your servers reach end of life, as the licenses can’t be purchased on the secondary market. Although it’s been noted that you only have to purchase one product key because the advanced key can be reused on multiple servers, doing so is against HPE’s terms of service. Generally, the GUI with iLO Advanced appears more natural and the platform seems quicker.

    Out of Band Management Systems Winner: HPE

    Cost of Initial Investment – Dell

    Price flexibility is almost nonexistent when negotiating with Dell; however, with bigger, repeat customers Dell has been known to ease into more of a deal. In the past Dell was seen as the more affordable option, but the initial cost of investment is nearly identical now. With Dell typically being less expensive, it tends to be the preference of enterprise professionals attempting to keep their costs low to increase revenue. Simply put, Dell is cheaper because it is so widely used, and everyone uses it because it’s more cost effective.

    Cost of Initial Investment – HPE

    HPE is generally more open to price negotiation, even though opening quotes are similar to Dell’s. Just like everything in business, your relationship with the vendor will be a much greater factor in determining price. Those that order in large quantities, more frequently, will usually have the upper hand in negotiations. That being said, HPE servers tend to be a little more expensive on average. When cost is not a factor, HPE tends to be the choice where long-term performance is the more important objective. HPE servers are supported globally through a number of channels. Due to the abundance of used HPE equipment in the market, replacement parts are fairly easy to come by. HPE also offers a more thorough documentation system, containing manuals for every little-known part HPE has ever made. HPE is enterprise class, whereas Dell is business class.

    Cost of Initial Investment Winner: Tie

    The Decisive Recap

    When it really comes down to it, HPE and Dell are both very similar companies with comparable features. When assessing HPE vs Dell servers, there is no winner. There isn’t a major distinction between the companies as far as manufacturing quality, cost, or dependability. Those are factors that should be weighed on a case by case basis.

    If you’re planning on replacing your existing hardware, sell your old equipment to us! We’d love to help you sell your used servers.

    You can start by sending us a list of equipment you want to sell. Not only do we buy used IT equipment, we also offer a range of related services.

    Apple’s Bug Bounty Program: Hackers Getting Paid

    How does one of the largest and most innovative companies in history prevent cyber attacks and data hacks? They hire hackers to hack them. That’s right, Apple pays up to $1 million to friendly hackers who can find and report vulnerabilities within their operating systems. Recently, Apple announced that it will open its Bug Bounty program to anyone to report bugs, not just hackers who have previously signed up and been approved. 

     

    Apple’s head of security engineering Ivan Krstic says that this is a major win not only for iOS hackers and jailbreakers, but also for users—and ultimately even for Apple. The new bug bounties directly compete with the secondary market for iOS flaws, which has been booming in the last few years.

     

    In 2015, exploit broker Zerodium revealed that it would pay $1 million for a chain of bugs that allowed hackers to break into the iPhone remotely. Ever since, the cost of bug bounties has soared. Zerodium’s highest payout is now $2 million, and Crowdfense offers up to $3 million.

    So how do you earn a bug bounty from Apple? We’ll break it down for you.

     

    What is the Apple Security Bounty?

    As part of Apple’s devotion to information security, the company is willing to compensate researchers who discover and share critical issues and the methods they used to find them. Apple makes it a priority to fix these issues in order to best protect its customers against a similar attack. Apple offers public recognition for those who submit valid reports and will match donations of the bounty payment to qualifying charities.

    See the Apple Security Bounty Terms and Conditions Here

    Who Is Eligible for a Bug Bounty?

     

    In order to qualify for an Apple bug bounty, the vulnerability you discover must be present in the latest publicly available versions of iOS, iPadOS, macOS, tvOS, or watchOS with a standard configuration. The eligibility rules are intended to protect customers until an update is readily available. This also ensures that Apple can confirm reports, create necessary updates, and properly reward those doing original research.

    Apple bug bounty requirements:

    • Be the first party to report the issue to Apple Product Security.
    • Provide a clear report, which includes a working exploit. 
    • Not disclose the issue publicly before Apple releases the security advisory for the report. 

    Issues that are unknown to Apple and are unique to designated developer betas and public betas can earn a 50% bonus payment.

    Qualifying issues include:

    • Security issues introduced in certain designated developer beta or public beta releases, as noted in their release notes. Not all developer or public betas are eligible for this additional bonus.
    • Regressions of previously resolved issues, including those with published advisories, that have been reintroduced in certain designated developer beta or public beta releases, as noted in their release notes.

    How Does the Bounty Program Payout?

     

    The amount paid for each bounty is decided by the level of access attained by the reported issue. For reference, a maximum payout amount is set for each category. The exact payment amounts are determined after Apple reviews the submission. 

    Here is a complete list of example payouts for Apple’s Bounty Program

    The purpose of the Apple Bug Bounty Program is to protect consumers through understanding both data exposures and the way they were utilized. In order to receive confirmation and payment from the program, a full detailed report must be submitted to Apple’s Security Team.  

     

    According to the tech giant, a complete report includes:

    • A detailed description of the issues being reported.
    • Any prerequisites and steps to get the system to an impacted state.
    • A reasonably reliable exploit for the issue being reported.
    • Enough information for Apple to be able to reasonably reproduce the issue. 

     

    Keep in mind that Apple is particularly interested in issues that:

    • Affect multiple platforms.
    • Impact the latest publicly available hardware and software.
    • Are unique to newly added features or code in designated developer betas or public betas.
    • Impact sensitive components.

    Learn more about reporting bugs to Apple here

    LTO Consortium – Roadmap to the Future

    LTO – From Past to Present 

    Linear Tape-Open, more commonly referred to as LTO, is a magnetic tape data storage solution first created in the late 1990s as an open-standards alternative to the proprietary magnetic tape formats available at the time. It didn’t take long for LTO tape to rule the super tape market and become the best-selling super tape format year after year. LTO is usually used with small and large computer systems, mainly for backup. The standard form factor of LTO technology goes by the name Ultrium. The original version of LTO Ultrium was announced at the turn of the century and was capable of storing up to 100 GB of data in a cartridge. Minuscule by today’s standards, this was unheard of at the time. The most recent generation of LTO Ultrium is the eighth generation, which was released in 2017. LTO-8 has storage capabilities of up to 12 TB (30 TB at a 2.5:1 compression rate).

    The LTO Consortium is a group of companies that directs development and manages licensing and certification of the LTO media and mechanism manufacturers. The consortium consists of Hewlett Packard Enterprise, IBM, and Quantum. Although there are multiple vendors and tape manufacturers, they all must adhere to the standards defined by the LTO consortium.  

    Need a way to sell older LTO tapes?

    LTO Consortium – Roadmap to the Future

    The LTO consortium disclosed a future strategy to further develop the tape technology out to a 12th generation of LTO. This happened almost immediately after the release of the recent LTO-8 specifications and the LTO-8 drives from IBM. Presumably sometime in the 2020s, when LTO-12 is readily available, a single tape cartridge should be capable of storing approximately half a petabyte of data.

    According to the LTO roadmap, cartridge capacity will double with every ensuing generation. This is the same model the group has followed since it distributed the first LTO-1 drives in 2000. However, the compression rate of 2.5:1 is not likely to change in the near future. In fact, the compression rate hasn’t increased since LTO-6 in 2013.
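
    The doubling rule above can be checked with a few lines of arithmetic, anchored at LTO-8’s 12 TB native capacity and the 2.5:1 compression ratio cited earlier:

```python
# Project native capacity per generation under the roadmap's doubling rule,
# anchored at LTO-8 (12 TB native, 2.5:1 compression).
def lto_capacity_tb(generation, base_gen=8, base_tb=12):
    """Native capacity in TB if capacity doubles each generation after LTO-8."""
    return base_tb * 2 ** (generation - base_gen)

for gen in range(8, 13):
    native = lto_capacity_tb(gen)
    print(f"LTO-{gen}: {native} TB native, {native * 2.5:.0f} TB compressed")
# LTO-12 works out to 192 TB native (480 TB compressed), i.e. roughly half
# a petabyte per cartridge, matching the roadmap's projection.
```
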

    Learn how you can pre-purchase the latest LTO-9 tapes

    The Principles of How LTO Tape Works

    LTO tape is made up of servo bands which act like guard rails for the read/write head. The bands provide compatibility and adjustment between different tape drives. The read/write head positions between two servo bands that surround the data band. 

    The read-write head writes multiple data tracks at once in a single, end-to-end pass called a wrap. At the end of the tape, the process continues as reverse pass and the head shifts to access the next wrap. This process is done from the edge to the center, known as linear serpentine recording.
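
    As a rough illustration, the alternating passes of linear serpentine recording can be modeled as a sequence of wraps whose direction flips on every pass. This is a simplified toy model of the access pattern described above, not a drive implementation:

```python
# Simplified model of linear serpentine recording: each wrap is one
# end-to-end pass, with the head reversing direction between wraps and
# stepping from the edge of the band toward the center.
def serpentine_wraps(num_wraps):
    """Yield (wrap_index, direction) for each end-to-end pass."""
    for wrap in range(num_wraps):
        yield wrap, "forward" if wrap % 2 == 0 else "reverse"

for wrap, direction in serpentine_wraps(4):
    print(f"wrap {wrap}: {direction} pass")
```
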

    More recent LTO generations have an auto-speed mechanism built in, unlike older LTO tape generations that suffered the stop-and-go of the drive when the flow of data changed. The built-in auto-speed mechanism lowers the streaming speed if the data flow drops, allowing the drive to continue writing at a constant speed. To ensure that the data just written on the tape is identical to what it should be, a verify-after-write process is used, employing a read head that the tape passes after the write head.

    But what about data security? To reach an exceptional level of data security, LTO has several mechanisms in place. 

    Due to several data reliability features, including error-correcting code (ECC), LTO tape has an extremely low bit error rate, lower than that of hard disks. With both the LTO-7 and LTO-8 generations, data reliability has a bit error rate (BER) of 1 × 10^-19. This signifies that the drive and media will have a single bit error in approximately 10 exabytes (EB) of data being stored. In other words, more than 800,000 LTO-8 tapes can be written without error. Furthermore, LTO tape allows for an air gap between tapes and the network. Having this physical gap between stored data and any malware or attacks provides an unparalleled level of security.
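
    The 800,000-tape figure follows directly from the numbers above, taking the roughly-10-EB-per-bit-error figure at face value:

```python
# Sanity check of the tape count cited above: one bit error per roughly
# 10 EB written (the article's figure) and 12 TB native per LTO-8 cartridge.
EXABYTE = 10**18
TERABYTE = 10**12

bytes_per_error = 10 * EXABYTE   # ~10 EB between single-bit errors
lto8_native = 12 * TERABYTE      # LTO-8 native cartridge capacity

tapes_without_error = bytes_per_error // lto8_native
print(tapes_without_error)  # 833333, i.e. "more than 800,000" tapes
```
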

     

    Learn more about air-gap data security here
