Sunday, January 15, 2017

Encryption and Key Management for Regulated Workloads: Questions for Your Cloud Service Provider.



Background and Context

Cloud service providers have written much lately, in over-simplified, glossy marketing and website articles, about their encryption and data protection services. Encryption in the cloud is a hot topic, and it is crucial for cloud service providers to design and implement secure services with broad applicability across their user base.

However, regulated clients need to give specific and focused attention to aspects that pertain to the compliant and secure handling of regulated data. The intent of this note is to outline key considerations for clients and cloud service providers serving regulated clients.

Important Note: Each of the many global and local regulators is taking a different position on the topics and questions below. It is appropriate to assume that regulators will take a cautious approach during early adoption of cloud infrastructure. Also, regulators may view IaaS and SaaS services as presenting different levels of risk. So far, SaaS services have provided easier adoption paths; however, that may be a point-in-time factor that does not hold in future.

Introduction

As soon as clients think about placing sensitive, private or regulated data into a cloud-hosted service, the question arises of how it is protected from unauthorized access. While traditional identity management and access control solutions are useful, they are not enough, as there are multiple potential points of circumvention. Therefore, additional layers of protection are required, and the most obvious of these is encryption.

Encryption is a way of encoding and protecting data using an algorithm seeded by keys. Data that has been encoded by an encryption algorithm is said to be encrypted. Decrypting data back to its original form requires access to the corresponding key.

Encryption algorithms are typically highly sophisticated, and to prevent keys from being guessed, keys are long sequences of bits. Even an attacker mounting a brute-force attack, guessing keys one after another, could need many hundreds of years of super-computing power to gain access to decrypted data.
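To make the brute-force claim concrete, here is a back-of-envelope calculation in Python. The adversary's guess rate is an assumption chosen purely for illustration:

```python
# Rough estimate: how long would a brute-force search of the AES-256
# keyspace take? The guess rate below is an illustrative assumption.
keyspace = 2 ** 256                 # number of possible 256-bit keys
guesses_per_second = 10 ** 18       # a hypothetical exa-scale adversary
seconds_per_year = 60 * 60 * 24 * 365

# On average the attacker searches half the keyspace before succeeding.
years = keyspace / (2 * guesses_per_second * seconds_per_year)
print(f"Expected years to brute-force a 256-bit key: {years:.3e}")
```

Even with these generous assumptions the expected search time is on the order of 10^51 years, which is why brute force is not a practical threat against properly generated keys.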

Therefore, encryption is typically considered the most secure way to protect data stored in a cloud service.

Note: there are other approaches, such as tokenization (substituting non-sensitive surrogate tokens for sensitive values) and obfuscation.

Points of Potential Encryption Layers

The consumer of a cloud service needs to be mindful that encryption can be performed in multiple places within the ecosystem. Typically, a client may wish to encrypt at multiple layers.

Layers include:
  • Individual storage drives (both spinning and solid-state) attached to servers: current-generation storage drives can typically self-encrypt and decrypt data at the I/O operation level. The manufacturer usually sets default encryption keys, but these can be changed using a BIOS utility or other drive firmware configuration software. Self-encrypting drives were primarily designed to protect against “fall off the back of a truck” transport situations, which are more prevalent in traditional data centers but highly unlikely given the business controls and practices adopted by cloud service providers. Therefore, self-encrypting drives are not considered to provide an acceptable level of protection, as data likely passed through multiple layers of infrastructure in unencrypted form before reaching storage.
  • A storage sub-system, such as a NAS or SAN device providing file, block or object service: All cloud service providers provide pre-configured storage services that can be mounted to servers using standard protocols, for example, NFS. Additionally, some cloud service providers enable their clients to deploy software-defined storage products and frameworks onto bare metal servers, including IBM Spectrum Accelerate, IBM Spectrum Scale, EMC ScaleIO, and many others. Storage subsystems typically provide encryption mechanisms. While encryption at this layer can be useful, it is typically unacceptable on its own, as data typically moves in the clear from the operating system to the storage service.
  • The Operating System (OS), for example, Windows or Linux, is often an appropriate layer in which to include encryption capabilities. Encryption libraries, from the OS vendor or third parties, can be installed and configured to ensure that the data within all I/O operations, whether to local drives or to a mounted file system device, is automatically encrypted on write and decrypted on read. Encryption and decryption align with the operating system's file access permissions.
  • A hypervisor, which can encrypt the virtual disks it presents to guest operating systems
  • Middleware capabilities such as a database or message queuing service can encrypt data above the operating system. This approach is typically built into middleware products to remove any dependency on an operating system framework or capability.
  • Applications: This method is used to remove the dependency on any lower-level service such as middleware, OS or storage. Given that some clients are concerned that application data could be accessible in the clear, whether from memory, network communication or storage, they take the very guarded approach of encrypting data within application operations.
  • At the customer premises, before transmission to a cloud service: A cautious client might determine that no clear, unencrypted data may be transferred to a cloud service at all. Client-side encryption is a highly cautious approach, but it ensures that there is little chance of access or decryption from within the cloud service. Note, however, that some cloud services add value through their ability to access and perform analytic or semantic operations on data; such services would most likely be unable to process encrypted input data.

Keys

Central to the function of encryption and decryption algorithms are the keys used to drive them. For the current standard of AES encryption algorithms, keys are typically 128, 192 or 256 bits in length. While the purpose of this article is not to go into detail about the algorithms or keys, multiple types of key might be used to drive an algorithm: an overall master key, key-encrypting keys that protect the encryption and decryption keys, and the encryption and decryption keys themselves.
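That three-tier hierarchy can be sketched as follows. This is a toy illustration in Python using XOR as a stand-in for key wrapping; real systems use a dedicated algorithm such as AES Key Wrap (RFC 3394), and all names here are illustrative:

```python
import secrets

def xor_wrap(key: bytes, wrapping_key: bytes) -> bytes:
    """Toy 'wrap' via XOR, for illustration only.
    Real systems use AES Key Wrap (RFC 3394) or similar."""
    return bytes(a ^ b for a, b in zip(key, wrapping_key))

# Three tiers: a master key protects key-encrypting keys (KEKs), and
# each KEK protects the data-encryption keys (DEKs) that actually
# encrypt stored data.
master_key = secrets.token_bytes(32)      # 256-bit master key
kek = secrets.token_bytes(32)             # key-encrypting key
dek = secrets.token_bytes(32)             # data-encryption key

wrapped_kek = xor_wrap(kek, master_key)   # only wrapped forms are stored
wrapped_dek = xor_wrap(dek, kek)

# To reach the data, keys are unwrapped top-down: master -> KEK -> DEK.
assert xor_wrap(wrapped_kek, master_key) == kek
assert xor_wrap(wrapped_dek, kek) == dek
```

The practical benefit of the hierarchy is that rotating or revoking the master key only requires re-wrapping the small KEKs, not re-encrypting the underlying data.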

Key Storage

While it may seem obvious, let’s be careful to note that keys of any kind need to be handled safely, with access protected so that a malicious actor cannot obtain them and decrypt protected data.

Firstly, some regulated clients question whether keys can ever be stored inside a cloud service at all. Regulated customers may choose to insist on storing keys on their premises; we’ll touch on the reasons for this in the key ownership section below. Restricting key storage to the client’s premises could severely impact the client’s ability to leverage services built by a cloud service provider or a 3rd party such as a SaaS provider, since either of these providers is likely to make assumptions about key storage that do not include access to a key repository outside the cloud service.

Note that placing a key-store outside the premises of a cloud service provider will introduce network latency into key-access operations requested from inside the cloud service. Depending upon the number of key-access operations, this may become a deployment architecture consideration.

Regardless of where a key-store is placed, it is likely to be accessed and operated using one of the following mechanisms:

  • Unsecured flat file or database – never a recommended approach!
  • Within an application’s source code or configuration – not recommended!
  • Using a secure software key store (e.g., macOS Keychain)
  • Using a key management and encryption framework, e.g., HyTrust or IBM Cloud Data Encryption Services
  • Using a Hardware Security Module (HSM), a trusted, attack-resistant security appliance that resides on the client premises or in the cloud service provider’s facilities. Most cloud service providers offer a form of HSM device. Some offer a virtual software version; others, such as IBM, offer a full hardware appliance resident on a client’s VLAN. Key users, such as an application or storage encryption framework, use standard secure APIs to access the HSM, which can also service encrypt/decrypt operations.
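A defining property of an HSM is that key material never leaves the device: callers receive only opaque handles and ask the device to encrypt or decrypt on their behalf. The sketch below models that interface in Python; the XOR “cipher” and all names are illustrative stand-ins, not any vendor’s actual API:

```python
import secrets

class ToyHSM:
    """Illustrative stand-in for an HSM: keys are generated and held
    inside the device and never returned to the caller; the API exposes
    only opaque handles plus encrypt/decrypt operations."""

    def __init__(self):
        self._keys = {}   # handle -> key material (never exposed)

    def generate_key(self) -> str:
        handle = secrets.token_hex(8)
        self._keys[handle] = secrets.token_bytes(32)
        return handle     # caller sees only the opaque handle

    def encrypt(self, handle: str, plaintext: bytes) -> bytes:
        key = self._keys[handle]
        # Toy repeating-key XOR purely for illustration; a real HSM
        # would use an authenticated cipher such as AES-GCM.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

    decrypt = encrypt     # XOR is its own inverse

hsm = ToyHSM()
handle = hsm.generate_key()
ciphertext = hsm.encrypt(handle, b"account 1234")
assert hsm.decrypt(handle, ciphertext) == b"account 1234"
```

Because the caller never holds the key itself, compromising the calling application does not directly expose key material, which is the core of the HSM value proposition.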

The Regulatory Consideration: Key Ownership and Attestation


For a regulated client to be compliant with anticipated regulatory requirements, the current thinking is that they will find it necessary to be able to assert and attest (meaning to provide documentary evidence) that the client, and only the client, owns and has access to keys.

If this regulatory position fully plays out as currently anticipated, it will mean some if not all of the following:
  • A client cannot use keys owned, created, shared, known or manipulated by a cloud service infrastructure provider or any other cloud ecosystem partner, such as a SaaS provider
  • A client must generate, own and not share encryption keys
  • A cloud service provider can have no access to a client key store, including those provided by the cloud, such as an HSM. Only a client may have secure access to a key store.
The key question for regulated clients is whether they can assert and attest regulatory-compliant key ownership when using the key management frameworks and encryption solutions provided by cloud service providers. Line-of-business developers, based upon assurances or marketing by cloud service providers, may have implemented regulated business capabilities on top of cloud service providers' embedded services. The associated certification staff will need to determine whether the security approach of those embedded services is compliant with regulatory standards.


As a closing thought on key-ownership, I should note that some cloud providers do enable their clients to assert and operate security and encryption frameworks that will allow regulatory key ownership attestation. IBM Cloud enables clients to construct uniquely compliant solutions with highly granular control of encryption and key management solutions, in harmony with its customers' current on-premises frameworks and approaches.

Key Re-Key - Changing Keys

Another important consideration in key management is the frequency with which to change keys. To improve security, a client will typically decide to change keys periodically, e.g., monthly.

When changing keys, it is paramount to be mindful that access to previously encrypted data will be lost unless:

  • Old keys are retained and associated with a date range period
  • Data encrypted with the old key is decrypted and re-encrypted with the new key. Given the quantity of stored data, this may create a time consideration, or a period during which data is temporarily unavailable.
  • If neither of the above is done and the old key is destroyed, access to any data encrypted with that key is lost. Once the key is destroyed, the encrypted data is effectively deleted and destroyed: decryption of previously encrypted data becomes impossible.
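The first option above, retaining old keys alongside their active date ranges, can be sketched as a simple key ring; the structure, dates and field names are invented for illustration:

```python
import secrets
from datetime import date

# Each key version is retained with its active-from date so that data
# encrypted under an old key remains readable until it is re-encrypted.
key_ring = [
    {"version": 1, "active_from": date(2016, 11, 1), "key": secrets.token_bytes(32)},
    {"version": 2, "active_from": date(2016, 12, 1), "key": secrets.token_bytes(32)},
    {"version": 3, "active_from": date(2017, 1, 1),  "key": secrets.token_bytes(32)},
]

def key_for_date(d: date) -> dict:
    """Return the key version that was active on a given date."""
    candidates = [k for k in key_ring if k["active_from"] <= d]
    return max(candidates, key=lambda k: k["active_from"])

# Data written in mid-December is decrypted with key version 2.
assert key_for_date(date(2016, 12, 15))["version"] == 2
```

In practice each stored object would carry its key version as metadata, so the correct key can be located without guessing from timestamps alone.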

Key Deletion and its Relationship to Data Deletion.

In a prior point, I noted that rigorously deleting a key is akin to deleting the data itself. While this may seem an obscure point, in a cloud environment it may be a more appropriate way to address data deletion and destruction than delayed overwrite algorithms or demagnetization: key deletion immediately prevents unauthorized access to the data.
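This idea, sometimes called crypto-erasure, can be demonstrated with a one-time pad, the simplest cipher for which the property provably holds; the record content below is made up for the example:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR data with a random key of equal length."""
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient-id:4711"             # illustrative sensitive record
key = secrets.token_bytes(len(record))
ciphertext = otp(record, key)

# While the key exists, the record is recoverable.
assert otp(ciphertext, key) == record

# Destroying the key is crypto-erasure: the ciphertext alone reveals
# nothing, so deleting the key is equivalent to deleting the data,
# without ever touching the stored ciphertext.
del key
```

The same logic applies to AES-protected data: once every copy of the key (and any key-encrypting key above it) is destroyed, the remaining ciphertext is effectively random bits.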


CONCLUSIONS and NEXT STEPS

In conclusion, regulated clients need to consider:

  • Where in an ecosystem to perform encryption and decryption
  • How and where to store and manage keys
  • How to assert and attest key ownership
  • Whether frameworks like IBM Cloud Data Encryption Service, HyTrust or others can assist
  • Under which circumstances customers may use embedded services that use keys managed or provided by a cloud service provider or 3rd party SaaS provider.

Tuesday, January 3, 2017

Attestation… Does Your Configuration Drift?

Happy New Year!

Now that we've relaxed, celebrated the holidays, and rung in a new year with optimism and hope, it is time to get right back to work! :-)


To begin 2017, let’s look at attestation and why it is necessary for regulated cloud environments.

Within the context of regulated industries, attestation is the ability to provide documented evidence to prove an assertion.  Such evidence might be needed to establish the “good” or “correct” configuration of server hardware, server BIOS, embedded firmware, hypervisor, container, operating system, device drivers, middleware, and applications.

As an example, a good configuration of an application environment means that every element of its executable code, libraries and configuration files has a traceable lineage back through build-control and source-control systems.  Typically, all components are appropriately licensed and have gone through appropriate vulnerability assessment processes.  Changes to code in a regulated application can only be made by following change-control processes that update the attestable evidence.

While change-control processes within the software development lifecycle and DevOps mechanisms are well understood, there has perhaps been less ability to attest configurations in a regulated cloud environment.  This is partly because certain infrastructure responsibilities may shift from the regulated entity to a cloud service provider.

For example, can your cloud service provider attest to the safe configuration of their servers, firmware or hypervisors?  If they can, what safeguards and change-control mechanisms are in place to cover changes to the configuration, whether planned, unintentional or malicious, a situation known as “configuration drift”?
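At the file level, drift detection reduces to comparing cryptographic digests against an attested baseline. The sketch below illustrates the principle in Python; the configuration content and temporary-file handling are invented for the example:

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    """Digest one configuration file; a set of these forms the
    attestable baseline manifest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Record the attested "good" configuration.
with tempfile.NamedTemporaryFile(mode="w", suffix=".conf", delete=False) as f:
    f.write("MaxConnections=100\n")
    cfg = f.name
baseline = sha256_file(cfg)

# Later, the file changes (planned, unintentional, or malicious).
with open(cfg, "a") as f:
    f.write("MaxConnections=9999\n")

drift_detected = sha256_file(cfg) != baseline
print("configuration drift:", drift_detected)   # configuration drift: True
os.remove(cfg)
```

Production tools extend the same idea below the file system, anchoring digests of firmware and boot components in hardware so that the baseline itself cannot be silently rewritten.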

Unless a cloud service provider can both attest and ensure a known configuration on an ongoing basis, the most cautious regulated firms will typically require bare metal capabilities of their cloud service provider.  Bare metal uniquely enables the regulated entity to control the configuration, detection, and attestation.

Make no mistake, attestation of configuration across many tiers of infrastructure is necessary but can be a burdensome and expensive challenge.  Some frameworks and tools simplify the operationalization of attestation.  For example, take a look at Cloud Raxak, a cloud compliance firm founded by former IBMer and friend, Sesh Murthy. Cloud Raxak documents configuration, and detects/addresses drift from boot time onwards through multiple stack tiers. https://www.cloudraxak.com

Attestation will be a recurring theme for regulated cloud in 2017, a year in which we can finally expect to see a widespread and accelerated adoption of Cloud Service Providers by regulated firms.