What type of access control list defines the settings for auditing access to an object?

Controlling Access to Your Environment with Authentication and Authorization

Thomas W. Shinder, ... Debra Littlejohn Shinder, in Windows Server 2012 Security from End to Edge and Beyond, 2013

Understanding Dynamic Access Control

Dynamic Access Control (DAC) allows the enterprise administrator to easily apply and manage access and auditing to domain-based file servers. To accomplish this task, DAC leverages the following features:

Claims: attributes about a user or device that a trusted source asserts, carried in the authentication token

Resource properties: classification attributes applied to the resource itself

Conditional expressions: expressions embedded in the permission and auditing entries

While DAC is perceived as one of the biggest enhancements in Windows Server 2012 from the authentication perspective, it does not accomplish all of this by itself. DAC leverages both the Kerberos protocol and claims, where a claim is a piece of information that a trusted source asserts about a specific entity.

The advantage of this feature is that you can now grant access to files and folders based on Active Directory attributes. The goal is to go beyond a traditional allow/deny decision based on user or group. You can leverage this feature to meet business requirements whose access conditions vary according to a series of parameters.

The core components of Dynamic Access Control are

Central Access Policy

Central Access Rule

Permission Entries

Each Central Access Policy object can include one or more Central Access Rule objects, and each Central Access Rule object contains one or more permission entries as shown in Figure 7.1.


Figure 7.1. CAP policies.
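The containment described above (a policy holds one or more rules, and a rule holds permission entries with conditional expressions) can be sketched as follows. This is a simplified illustrative model, not the actual Windows evaluation semantics; the claim names (Department, Clearance) and resource properties are invented for the example.

```python
# Simplified model: a Central Access Rule grants access only if every
# conditional expression in its permission entries holds; here a Central
# Access Policy allows access if any one of its rules does. Claim and
# property names are hypothetical, not real AD schema attributes.

def evaluate_rule(rule, user_claims, resource_props):
    return all(cond(user_claims, resource_props) for cond in rule["conditions"])

def evaluate_policy(policy, user_claims, resource_props):
    return any(evaluate_rule(r, user_claims, resource_props)
               for r in policy["rules"])

policy = {
    "rules": [
        {   # e.g. "User.Department == Resource.Department AND
            #       User.Clearance >= Resource.Sensitivity"
            "conditions": [
                lambda u, r: u["Department"] == r["Department"],
                lambda u, r: u["Clearance"] >= r["Sensitivity"],
            ]
        }
    ]
}

user = {"Department": "Finance", "Clearance": 3}
resource = {"Department": "Finance", "Sensitivity": 2}
print(evaluate_policy(policy, user, resource))  # True: both conditions hold
```

The point of the conditional expressions is visible in the lambdas: the decision depends on attributes of both the user's token and the resource, not on a static group membership alone.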

DAC Requirements

Although many administrators think that you need a Windows Server 2012-only environment to implement DAC, this is not true. It is possible to leverage DAC capabilities in a mixed scenario, even one that includes Windows XP. In this case, you will need to deploy the Windows Settings/Security Settings/Local Policies/Security Options/Microsoft network server: Attempt S4U2Self to obtain claim information policy, as shown in Figure 7.2, in order to allow the file server to obtain a network client principal's claims from the client's account domain.


Figure 7.2. MMC console.

In a nutshell, the requirements to deploy DAC on your environment are

From the Domain perspective

Extend the Active Directory schema

Windows Server 2012 Key Distribution Center (KDC)

- Enable the KDC support for claims, compound authentication, and Kerberos armoring policy.

- On the client, enable the Kerberos client support for claims, compound authentication, and Kerberos armoring policy.

From the File Server perspective

Windows Server 2012 File Server Role

From the Client perspective, when using Device Claim

Windows 8

Planning for DAC

One very important aspect of planning a DAC deployment where servers run Windows Server 2012 and clients run Windows 8 is DC placement. When claims support is enabled, Windows Server 2012 and Windows 8 will always use a Windows Server 2012 DC to authenticate. If your environment was not sized correctly, you can create an authentication bottleneck. This can have a negative impact in branch-office scenarios: remote computers that were authenticating to a local Windows Server 2008 R2 DC will, after claims are enabled, look for a remote Windows Server 2012 DC over a potentially congested WAN link.


URL: https://www.sciencedirect.com/science/article/pii/B9781597499804000078

Cloud Computing Infrastructure for Data Intensive Applications

Yuri Demchenko, ... Charles Loomis, in Big Data Analytics for Sensor-Network Collected Intelligence, 2017

7.3 Dynamic Access Control Infrastructure

DACI presents a virtual infrastructure to provide access control services to an on-demand cloud formation. As depicted in Fig. 13, the following basic security services are provided with DACI:


Fig. 13. Security services of DACI.

Authentication and identity management service: provides the authentication service, and issues and verifies attribute statements bound to authenticated subjects using the Security Assertion Markup Language (SAML) specification [33].

Authorization service: provides the authorization service compliant with the XACML-SAML profile [34].

Token validation service: issues and validates authorization tokens to improve the decision performance of the authorization service.

These services provide a basic infrastructure for managing access control in dynamic environments. Since CYCLONE use cases involve sensitive data such as human genome sequences, we extend the DACI services with three additional security services: an encryption service that protects data at rest (or in transit), a key management service for storing and exchanging keys, and distributed logging. Moreover, helper tools that assist with the specification and verification of policies are made available [35,36].
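The token validation service above exists to make authorization decisions cheap to re-check. One common way to build such a token, sketched below under assumptions (the source does not specify DACI's token format), is an HMAC-signed statement that downstream services can verify without re-running a full XACML evaluation. The key, field names, and encoding here are all illustrative.

```python
# Hypothetical authorization token: an HMAC-SHA256 signature over a JSON
# payload. A service holding the shared key can validate the token
# locally instead of calling back to the authorization service.
import hashlib
import hmac
import json
import time

SECRET = b"shared-service-key"  # would be distributed during trust bootstrapping

def issue_token(subject, resource, action, ttl=300):
    payload = json.dumps({"sub": subject, "res": resource,
                          "act": action, "exp": time.time() + ttl},
                         sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def validate_token(payload, sig):
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # payload tampered with, or signed under a different key
    return json.loads(payload)["exp"] > time.time()  # reject expired tokens

payload, sig = issue_token("alice", "genome-db", "read")
print(validate_token(payload, sig))        # True
print(validate_token(payload + " ", sig))  # False: signature no longer matches
```

Using `hmac.compare_digest` rather than `==` avoids leaking signature information through timing differences.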

7.3.1 Dynamic trust bootstrapping

The initialization and deployment of a security infrastructure such as DACI in on-demand cloud provisioning over multiple cloud providers implies that there must be a dynamic mechanism to establish trust between the involved parties and to populate the information necessary for the proper functioning of the infrastructure. This process, also known as trust bootstrapping, may involve the collection of keys/certificates from all partners, retrieval of the list of identity providers (in a federated setting), and so on. Some of this contextual information needs to be provided in a preconfigured manner, while other information can be retrieved automatically. For instance, bioinformatics researchers are often affiliated with an organization that is part of a larger network, such as EDUGAIN [37], where the retrieval of certain information about user identities can be automated.

The implementation of dynamic trust bootstrapping involves additional services such as context management that interplay with DACI components. In CYCLONE, we are currently investigating how the bootstrapping process can be integrated into application deployment over a multicloud application management platform such as Slipstream [28].


URL: https://www.sciencedirect.com/science/article/pii/B9780128093931000027

Access Control Lists

Dale Liu, ... Luigi DiGrande, in Cisco CCNA/CCENT Exam 640-802, 640-822, 640-816 Preparation Kit, 2009

Dynamic ACLs

To call a Dynamic or “Lock and Key” ACL an enhancement is a bit of a stretch, since the feature was first introduced in IOS Release 11.1. However, because dynamic ACLs rely on extended ACLs together with the Telnet application and authentication, they definitely use ACLs in a novel way to solve a particular problem.

A dynamic ACL is applied to an interface just like any other ACL. The ACL blocks traffic from flowing through the interface until a user telnets to the router and is authenticated. Once the user is successfully authenticated, an ACE is dynamically added to the ACL, which permits the user through for the duration of a timeout period, or until the connection is idle for a certain amount of time.

In practice, this feature was very useful for extranet connections where a company wanted to grant external users limited access to internal resources for a temporary period. Dynamic ACLs aren't used much anymore because client VPNs (IPSec or SSL) provide the same functionality along with encryption.
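A minimal lock-and-key configuration along the lines described above might look like the sketch below. The usernames, addresses, and timeouts are illustrative; note the `host` keyword on `access-enable`, which opens the dynamic entry for the authenticated host only rather than for all sources.

```
! Sketch of a lock-and-key ("dynamic") ACL configuration.
! ACL 101 initially permits only Telnet to the router itself; the
! dynamic entry "mytunnel" is created per host after authentication.
username student password 0 letmein
!
access-list 101 permit tcp any host 172.16.1.1 eq telnet
access-list 101 dynamic mytunnel timeout 120 permit ip any any
!
interface Serial0
 ip access-group 101 in
!
line vty 0 4
 login local
 autocommand access-enable host timeout 10
```

The `timeout 120` on the dynamic entry is the absolute lifetime; the `timeout 10` on `access-enable` is the idle timeout, matching the two expiry conditions described in the text.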


URL: https://www.sciencedirect.com/science/article/pii/B9781597493062000130

Traffic Filtering in the Cisco Internetwork Operating System

Eric Knipp, ... Edgar Danielyan, Technical Editor, in Managing Cisco Network Security (Second Edition), 2002

Lock-and-key Access Lists

Allows authenticated access through an access list via a Telnet session.

The autocommand feature can be used to prevent users from entering the wrong commands. This can be done across all VTY ports, or on a per-user basis.

Remember to use the host keyword with the access-enable command, or else the entire dynamic ACL will be opened, instead of one particular host address.

Be sure your inbound access list doesn’t restrict Telnet access to your router, or else you will not be able to use the lock-and-key feature.

You can only create one dynamic access list per extended access list. Anything beyond the first one will be ignored. You can have multiple entries using the same dynamic-name in an extended ACL.

Dynamic access lists must have different names from any other named access lists defined in the router.


URL: https://www.sciencedirect.com/science/article/pii/B9781931836562500088

MCSE 70-293: Planning, Implementing, and Maintaining a Security Framework

Martin Grasdal, ... Dr. Thomas W. Shinder, Technical Editor, in MCSE (Exam 70-293) Study Guide, 2003

Planning and Implementing Active Directory Security

Windows Server 2003 supports statically assigned authorization to resources. This information is used to determine whether access to a resource is granted or denied. This is also referred to as static access control. Administrators can control access to AD objects by assigning them security descriptors. The security descriptor consists of information regarding the object’s ownership, access control lists (ACLs), and auditing.

Head of the Class…

Understanding Static versus Dynamic Access Control

As you might guess, in addition to static access control, there is another type of access control called dynamic access control (not currently supported by Windows Server 2003). With static access control, the information used to grant or deny access is preconfigured. At logon, a user is assigned an access control token according to the user’s account information (such as group memberships). If that information is changed (for example, the user is added to a new group or removed from a group), the change does not become effective until the user logs off and then logs on to receive a new access token.

With dynamic access control, access information can be changed dynamically, and the system determines whether to grant access at the time the request is made, based on information at that time, instead of a token issued at logon.

The ACL holds the static access control information. There are two parts to the ACL in the security descriptor:

The discretionary access control list (DACL): The DACL contains the information about which users and groups are allowed (or denied) permission to access the object, and the level of access granted. The security descriptor can even be configured to control access to a particular attribute of an object.

The system access control list (SACL): The SACL specifies the events that should be audited (if auditing is enabled).

DACLs and SACLs are associated with each type of AD object in Windows Server 2003.
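As a toy illustration of this split, the sketch below models a security descriptor whose DACL answers "is this access allowed?" and whose SACL answers "should this access attempt be audited?". The field names are invented, not the real Windows binary format, and treating any matching deny entry as decisive is a simplification of real ordered ACE processing.

```python
# Toy security descriptor: owner, plus two ACLs built from access
# control entries (ACEs). The DACL drives access decisions; the SACL
# drives audit settings. Not the actual Windows data structures.
from dataclasses import dataclass, field

@dataclass
class Ace:
    trustee: str        # user or group the entry applies to
    rights: set         # e.g. {"Read", "Write"}
    allow: bool = True  # allow entry or deny entry

@dataclass
class SecurityDescriptor:
    owner: str
    dacl: list = field(default_factory=list)  # discretionary ACL
    sacl: list = field(default_factory=list)  # system ACL (auditing)

    def access_allowed(self, trustee, right):
        # Simplification: any matching deny entry wins over allow entries.
        for ace in self.dacl:
            if ace.trustee == trustee and right in ace.rights and not ace.allow:
                return False
        return any(ace.trustee == trustee and right in ace.rights and ace.allow
                   for ace in self.dacl)

    def should_audit(self, trustee, right):
        return any(ace.trustee == trustee and right in ace.rights
                   for ace in self.sacl)

sd = SecurityDescriptor(
    owner="Administrator",
    dacl=[Ace("Managers", {"Read", "Write"}),
          Ace("Guests", {"Write"}, allow=False)],
    sacl=[Ace("Guests", {"Write"})],   # audit guests' write attempts
)
print(sd.access_allowed("Managers", "Read"))  # True
print(sd.should_audit("Guests", "Write"))     # True
```

The key point the model captures is that the two lists are independent: an access attempt can be denied by the DACL and still generate an audit record because of the SACL.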

There are three types of standard permissions supported with AD: Full Control, Write, and Read. Each AD object has these three standard permissions available for use. In addition to these three standard permissions, there are a number of additional permissions that can be used to control access more granularly. These depend on the object type. For example, additional permissions that can be assigned for user objects include Create All Child Objects, Delete All Child Objects, Allowed to Authenticate, Change Password, Receive As, Reset Password, Send As, Read Account Restrictions, Write Account Restrictions, Read General Information, Write General Information, Read Group Membership, Write Group Membership, Read Logon Information, Write Logon Information, Read Personal Information, Write Personal Information, Read Phone and Mail Options, Write Phone and Mail Options, Read Public Information, Write Public Information, Read Remote Access Information, Write Remote Access Information, Read Web Information, Write Web Information, and Special Permissions (configured through the Advanced settings).

So how are all these additional permissions used? For example, you could use the Read Group Membership permission to specify which users and groups are allowed to read the group membership information for a particular user account, as follows:

1. Select Start | All Programs | Administrative Tools | Active Directory Users and Computers.

2. Click View | Advanced Features (this will place a check mark by it in the menu).

3. In the left pane of the Active Directory Users and Computers console, expand the domain node and click the Users container.

4. In the right pane of the console, right-click the user account for which you want to set access controls and select Properties.

5. Click the Security tab. View the list of group and usernames in the top pane. When you select one, you will see the permissions assigned to it in the bottom pane. To add a group or user to this list, click the Add button.

6. Click Allow or Deny for each permission to granularly configure the permissions on the user account object for each user or group.

Another important aspect of access control is ownership. When a user creates an object, that user is designated as the owner of the object. An owner/creator has full access to the objects that user owns. Ownership can be delegated by the current owner to someone else.

To view the DACL, SACL, and ownership information for an object (such as a user account), click the Advanced button on the Security tab and do the following:

Click the Permissions tab to view the DACL.

Click the Auditing tab to view the SACL.

Click the Owner tab to view the ownership information.

Note

To get a detailed view of a particular access control entry (ACE), select the appropriate ACE on the Permissions or Auditing tab and click the Edit button.

When implementing Windows Server 2003’s AD, you need to provide security for your entire organization. AD allows for this by letting you implement user authorization and built-in logon authentication. User authorization gives specified clients access to objects such as folders. Built-in logon authentication handles system permissions on the network.

You can also use trust relationships and Group Policy to provide security solutions for your AD network. Table 11.1 shows some AD security scenarios, along with solutions or tools you can use to plan and implement security.

Table 11.1. Scenarios and Solutions for a Stronger and More Secure Active Directory

Scenario | Solutions
You need to support and manage two forests in your AD security framework, and authentication across forests needs to be simplified. Use a forest trust, which is a trust between two Windows Server 2003 forests. This trust will create trust relationships between every domain in the two forests. They can be created only on the forest root domains in each forest. These forest trusts are transitive. They can be one-way or two-way trusts. Unlike a parent–child trust, which is automatically established (implicit trust), administrators must manually establish a forest trust (explicit trust).
You need to enforce strong password policies because you have seen the word password used as an actual client password. Use Group Policy to enforce strong password policies. Strong passwords are at minimum eight characters long and contain at least three or four characteristics, such as uppercase characters, lowercase characters, numeric digits, and symbols found on the keyboard (for example, !, @, $, #). They do not contain any part of the client’s user account name, words in dictionaries, or other easily guessed information.
You need to check event logs as part of your daily routine. Use Group Policy to enable the Audit Policy function. Checking event logs as a daily routine can allow you to quickly recognize a security risk.
You need to make sure no one is trying to guess at user’s passwords in an attempt to compromise security. Use Group Policy to use the Account Lockout Policy function on user accounts. This will limit the possibility of an attacker compromising the domain through frequent logon attempts.
You need to minimize the possibility of an attacker trying to crack a user’s password. Use Group Policy to enforce minimum and maximum password age policies on user accounts to decrease the possibility of an attacker compromising your domain. As a rule, it is best practice to have passwords expire every 30 to 90 days. The default password age is 42 days.
You need to authenticate and verify the validity of each client. Use public key cryptography to authenticate and verify each client’s identity.
You need to administer servers in your domain, but you do not want to be logged on as Administrator for long periods of time. Use the Run as command to perform administrative tasks on the servers without needing to log on with your administrative credentials.
You need to thwart attacks from people who might try to grant elevated user rights to another user account. Use security identifier (SID) filtering to stop the elevation of privilege attacks.
You have trade secrets and other confidential information on your network and you need to secure them. Implement smart card authentication to provide two-factor authentication, encrypt data on the disk with Encrypted File System (EFS), and protect data traveling across the network with IPSec.
You need to secure account passwords on domain controllers, member servers, and local computers. Implement the System Key Utility (syskey), which will provide strong encryption techniques to secure account password information.
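The strong-password rules in Table 11.1 (at least eight characters, at least three of the four character classes, and no part of the account name) can be checked mechanically. The sketch below is an illustrative checker, not any Windows-provided API, and it does not attempt the dictionary-word check the table also mentions.

```python
# Check the Table 11.1 strong-password criteria: length >= 8, at least
# three of {uppercase, lowercase, digit, symbol}, and no part of the
# account name embedded in the password. Dictionary checks omitted.
import string

def is_strong(password, username):
    if len(password) < 8:
        return False
    if username and username.lower() in password.lower():
        return False
    classes = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3

print(is_strong("password", "jdoe"))     # False: only one character class
print(is_strong("Tr1cky!pass", "jdoe"))  # True: four classes, no username
```

A Group Policy "password must meet complexity requirements" setting enforces rules of this shape domain-wide, which is the solution the table recommends.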

The following are some basic security guidelines:

Avoid granting Full Control permissions over objects or Organizational Units (OUs) except when absolutely necessary, because this allows someone to take ownership of an object and also modify permissions on the object. The user will also have full control over all objects in that container unless inheritance of permissions is blocked.

Avoid changing the default permissions on AD objects, because this can cause unexpected results by creating access problems or reducing security.

Reduce the number of access control entries (ACEs) that apply to child objects. When using the Apply Onto option to control inheritance, not only do the specified objects inherit that access control, but also all child objects receive a copy of that ACE. Too many copies of this ACE on the network could significantly reduce network performance.

In the Windows Server 2003 family, ACLs feature single-instancing. Single-instancing works by storing only one instance of the ACL, even if multiple objects have identical ACLs.

If possible, assign permissions to groups rather than users. This makes management easier.

Generally, allow Read All Properties or Write All Properties permissions, rather than setting controls on individual properties, unless there are compelling reasons to do so.

Allow Read or Write access to property sets, rather than to individual properties.

Note

Property sets are also called attribute sets. These are defined sets of attributes that represent the entire set in an ACL. Microsoft defines 10 attribute sets, and you can define custom attribute sets, but each attribute can be a member of only one set.


URL: https://www.sciencedirect.com/science/article/pii/B9781931836937500154

Cisco Authentication, Authorization, and Accounting Mechanisms

Eric Knipp, ... Edgar Danielyan, Technical Editor, in Managing Cisco Network Security (Second Edition), 2002

How the Authentication Proxy Works

The authentication proxy works like this:

1. A user initiates an HTTP session via a Web browser through the IOS Software Firewall and triggers the authentication proxy.

2. The authentication proxy checks whether the user has already been authenticated. If the user has been authenticated, the connection is completed. If not, the authentication proxy prompts the user for a username and password.

3. After the user has entered their username and password, the authentication profile is downloaded from the AAA (RADIUS or TACACS+) server. This information is used to create dynamic access control entries (ACEs), which are added to the inbound access control list (ACL) of an input interface and to the outbound ACL of an output interface (if an output ACL exists). For example, after I successfully authenticate by entering my username and password, my profile is downloaded to the firewall, and ACLs are dynamically altered and applied appropriately to the inbound and outbound interfaces. If my profile permits me to use FTP, an outbound ACL is dynamically added to the outbound interface (typically, the outside interface) allowing this. If the authentication fails, the service is denied.

4. The inbound and/or outbound ACL is altered by replacing the source IP address in the access list downloaded from the AAA server with the IP address of the authenticated host (in this case, the workstation’s IP address).

5. As soon as the user has successfully authenticated, a timer begins for each user profile. As long as traffic is being passed through the firewall, the user will not have to reauthenticate. If the authentication timer expires, the user must reauthenticate before traffic is permitted through the firewall again.
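Step 4 of the sequence above, substituting the authenticated host's address for the generic source in the downloaded profile, can be illustrated with a small sketch. The ACE string format and function are invented for illustration; they are not actual IOS internals.

```python
# Illustrative rewrite of downloaded per-user ACEs: the source field
# ("any" in the profile stored on the AAA server) is replaced with the
# authenticated host's address before the entries are installed.
def install_user_acl(downloaded_aces, authenticated_ip):
    installed = []
    for ace in downloaded_aces:
        # e.g. "permit tcp any any eq 21" -> source "any" becomes "host <ip>"
        parts = ace.split()
        parts[2] = f"host {authenticated_ip}"  # parts[2] is the source field
        installed.append(" ".join(parts))
    return installed

profile = ["permit tcp any any eq 21",   # this user is allowed FTP
           "permit tcp any any eq 80"]   # and HTTP
print(install_user_acl(profile, "10.0.0.5"))
# ['permit tcp host 10.0.0.5 any eq 21', 'permit tcp host 10.0.0.5 any eq 80']
```

This is why one stored profile on the AAA server can serve many users: the firewall specializes it per host at authentication time.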


URL: https://www.sciencedirect.com/science/article/pii/B9781931836562500131

Emerging Security Challenges in Cloud Computing, from Infrastructure-Based Security to Proposed Provisioned Cloud Infrastructure

Mohammad Reza Movahedisefat, ... Davud Mohammadpur, in Emerging Trends in ICT Security, 2014

Provisioned access control infrastructure (DACI)

Developing a consistent framework for dynamically provisioned security services requires deep analysis of all underlying processes and interactions. Many processes typically used in traditional security services need to be abstracted, decomposed, and formalized. This applies first of all to security service setup, configuration, and security context management, which in many present solutions/frameworks are performed manually during service installation or configured out-of-band.

The general security framework for on-demand provisioned infrastructure services should address two general aspects: (1) supporting secure operation of the provisioning infrastructure, which is typically provided by the providers’ authentication and authorization infrastructure (AAI) supported also by federated identity management services (FIdM), and (2) provisioning a dynamic access control infrastructure as part of the provisioned on-demand virtual infrastructure. The first task is primarily focused on the security context exchanged between involved services, resources, and access control services. The virtualized DACI must be bootstrapped to the provisioned on-demand group-oriented virtual infrastructure (VI), the virtual infrastructure provider (VIP), and the virtual infrastructure operator (VIO). Such security bootstrapping can be done at the deployment stage.

Virtual access control infrastructure setup and operation is based on the above-mentioned dynamic security association (DSA), which links the VI dynamic trust anchor(s) with the main actors and/or entities participating in the VI provisioning: the VIP and the requestor or target user organization (if they are different). As discussed above, the DSA for a given VI can be created during the reservation and deployment stages.

The reservation stage will allow the distribution of the initial provisioning session context and collection of the security context (e.g., public key certificates) from all participating infrastructure components. The deployment stage can securely distribute either shared cryptographic keys or another type of security credential that will allow validation of information exchange and application of access control to VI users, actors, and services.


URL: https://www.sciencedirect.com/science/article/pii/B9780124114746000232

Extending cloud-based applications with mobile opportunistic networks

S. Pal, W. Moreira, in Pervasive Computing, 2016

3.4 Data Storage Protection

In a traditional on-premise application deployment model, sensitive information is stored within a secure boundary based on an organization/company’s policy, fixed security measures, and protocols. In mobile opportunistic/cloud-based networks, there is no fixed infrastructure for communication. Users must somehow overcome the inherent uncertainty of an available contact opportunity, relying upon locally available infrastructures while hoping for the secure handling of their data (Li et al., 2015b). Efficiently protecting users’ data inside such decentralized environments is therefore especially challenging when that data is stored locally on a device.

To address this, traditional encryption-based security mechanisms can be employed. However, a growing concern in the use of mobile-cloud networks is the level of control required over a CSP, and how a cloud provider can prove itself trustworthy with its clients’ encrypted data at rest when the service provider itself holds the corresponding encryption keys (Grobauer et al., 2011). Therefore, it must be ensured that cloud-managed user data are protected from vulnerable service providers via encryption in the data storage (Van Dijk et al., 2012).

Moreover, support for dynamically concurrent access control must be provided given the user’s high mobility (users access information in different locations from different devices). To address this, a mechanism that supports dynamic access control and employs fault-tolerant and data integrity schemes to guarantee proper handling of the user data should be implemented (Bowers et al., 2011).

An alternative mechanism for ensuring data integrity in data transmissions in a mobile opportunistic/cloud-based network is to employ a third-party auditor (TPA), which checks the integrity of stored data in an online storage (Wang et al., 2009). The use of a TPA eliminates the direct involvement of clients in the system, which is important for achieving the economic and performance advantages of cloud-based solutions. This solution also supports data dynamics via the most general forms of data operations, for example, block modification, insertion, and deletion, and further ensures user privacy by strengthening data integrity.

Along similar lines, privacy-preserving data mining mechanisms can be used for securing sensitive data (Verykios et al., 2004). The major purpose of this data mining technique is to selectively identify patterns in order to make predictions about data stored in a data center.

However, within the context of mobile opportunistic/cloud-based networks, it is difficult to structure a pattern for stored data due to the lack of a fixed infrastructure or routing protocol for data storage or data forwarding. These networks face challenges of mining a user’s PII, which has various privacy concerns that create potential security risks to the system. To this end, an anonymization algorithm for privacy-preserving data mining based on the generalization technique can be employed (Mohammed et al., 2011). This noninteractive anonymization algorithm provides a better classification analysis of stored data in a privacy-preserving manner.

On the other hand, a growing concern in mobile opportunistic/cloud-based networks is the ability to process large amounts of data (i.e., Big Data (Che et al., 2013)) using resource-constrained mobile devices (Qi and Zong, 2012). Big Data mining extends data mining techniques to enable innovative approaches at large scale that keep in mind the increased value of the user’s PII. When the volume of data increases in such networks, the concern is the availability of network communications to satisfy the required bandwidth for data processing, which may introduce security and privacy issues to the system (Xu et al., 2014).

From the security point of view, a malicious insider can extract a user’s private information and use it to violate data integrity through unwanted applications (e.g., modification of certain parts of the data). Moreover, in Big Data mining this problem grows with the large volume and velocity of the data streams (Michael and Miller, 2013). Focusing on these issues in a mobile opportunistic/cloud-based network, a major security challenge is how to protect and conduct integrity assessments on such large-scale data when seamless network communication may not always be available. A security-monitoring management scheme may be employed to track, log, and record such malicious activity (Marchal et al., 2014). By detecting unwanted data manipulation, this scheme prevents further data loss by mitigating potential damage in the network.

Additionally, in the context of data mining, from the privacy point of view, risks (e.g., disclosure of a user’s private data or distortion of sensitive information) may arise from the possible exposure of a user’s PII to a malicious network environment while data are being collected, stored, and analyzed by the data mining process (Sagiroglu and Sinanc, 2013). In mobile opportunistic/cloud-based networks, this privacy issue may emerge from the potential risk of losing a user’s personal information during the storage and manipulation of such data. Secure multiparty computation techniques can be employed to help filter out malicious users during data communication by mapping the authenticity of the various data elements (Hongbing et al., 2015).

Another concern relates to the privacy and security risks that arise due to data leakage. To address such data leakage, a technique that breaks down sensitive data into insignificant fragments can be employed (Anjum and Umar, 2000). This ensures that a fragment will not contain all of the significant information by itself. By redundantly separating such fragments across various distributed systems, this will mitigate the data leakage problem.

Data leakage may result from the way data flows through these networks. This can be solved through the use of strong network traffic encryption techniques related to secure socket layer (SSL) and the transport layer security (TLS) (Ordean and Giurgiu, 2010). Furthermore, security mechanisms based on distributed cryptography, for example, high-availability and integrity layer (HAIL) (Bowers et al., 2009), can further prevent data leakage by allowing a set of servers to prove to a client that a stored file is intact and retrievable. In mobile opportunistic/cloud-based networks, HAIL can also prevent data leakage while managing file integrity and availability across a collection of independent storage services.

Moreover, from the data storage point of view, data backup is a critical aspect in order to facilitate recovery in the case of connection failures. For example, new privacy issues may arise in the network, for example, potential data loss from data backup in a third-party user within a malicious wireless network environment (Subashini and Kavitha, 2011). It is challenging to manage a privacy-aware data backup mechanisms for users in a mobile opportunistic/cloud-based network, because there are no contemporary paths available between any pair of nodes at a given time. Defense mechanisms like strong security schemes at data storage are therefore needed. Such mechanisms may use attribute-based encryption techniques to protect a user’s data as a way to mitigate data backup issues (Zhou and Huang, 2013). Within the context of mobile opportunistic/cloud-based networks, this feature allows data backup, preventing potential disclosure of a user’s sensitive information.


URL: https://www.sciencedirect.com/science/article/pii/B9780128036631000140

AI and Cloud Computing

Zhen Yang, ... Xing Li, in Advances in Computers, 2021

3 Our model: Protecting personal sensitive data security in the cloud with blockchain

In order to address the aforementioned issues, we design a blockchain-based model that protects personal sensitive data in the cloud. The terminologies used in our model and their meanings are listed in Table 1.

Table 1. Terminologies in our model.

Symbol | Meaning
DO | Data owner
CS | Cloud server
DU | Data user
dbs | The encrypted sensitive data, divided into many data blocks
dbli | The i-th block of the encrypted sensitive data (i = 1, …, n)
dbhi | The hash of dbli (i = 1, …, n)
dh | All dbhi (i = 1, …, n)
MHT | Merkle Hash Tree
drh | The root hash of the MHT whose leaves are dbhi (i = 1, …, n)
dkDU | Decryption key of the sensitive data for DU
pkDU | Public key of DU in the Ethereum blockchain system
skDU | Secret key of DU in the Ethereum blockchain system
⟦dkDU⟧ | Ciphertext of dkDU, encrypted with pkDU

Source: author.
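The drh identifier from Table 1 is obtained by hashing each data block (giving the dbhi) and building a Merkle Hash Tree over those hashes. A minimal sketch follows, assuming SHA-256 as the hash and duplication of the last node on odd-sized levels (details the chapter does not specify):

```python
import hashlib

def block_hashes(dbs: list) -> list:
    # dbh_i: the hash of the i-th encrypted data block dbl_i
    return [hashlib.sha256(block).digest() for block in dbs]

def merkle_root(hashes: list) -> bytes:
    # drh: root hash of the MHT whose leaves are the dbh_i
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:              # odd level: duplicate the last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because drh commits to every block, any change to any dbli changes drh, which is why the model can use it as the unique data identifier.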

Our model contains three major components:

Data Owner (DO): DO is the source of personal sensitive data. It has the ownership of the data.

Data User (DU): DU has the demand of accessing DO's personal sensitive data in the cloud.

Cloud Server (CS): CS offers cloud storage service to DO and DU.

In addition to DO, DU, and CS, our model includes a smart contract based on the Ethereum blockchain. The structure of our model is shown in Fig. 2. The underlying Ethereum blockchain is a public ledger data structure used in the Ethereum cryptocurrency system and maintained by arbitrary nodes on the Internet under the PoW consensus mechanism. The PoW consensus mechanism and the Ethereum network make the Ethereum blockchain tamper-proof.


Fig. 2. Our model for personal sensitive data management in cloud environment.

Source: author.

In our model, each of the three major components (DO, DU, CS) has an Ethereum account, which holds a pair of keys (a public key and a private key) to support encrypted communication and digital signatures. Communication between DO, DU, and CS is thus implemented via the Ethereum account transaction mechanism, which guarantees the authenticity of communication content with digital signatures.

The smart contract, based on the Ethereum blockchain, offers the ability to record data access control policies and operation logs in the immutable blockchain structure. The smart contract is designed and deployed by CS to respond automatically to requests from DO/DU. CS can publish the contract source code and offer a period of open testing to any DO/DU. When responding, the contract first verifies the requester's identity and authorization; after verification, it updates the data access control policy or records the data operation log.

In our model, a data access policy can be granted or revoked by DO at a fine granularity. DO outsources its personal sensitive data in ciphertext form; therefore, each access control policy includes the data decryption key (dk) prepared for the authorized DU. In Fig. 2, updating the access control policy consists of interactions between DO and the smart contract. Because no data operation is involved in updating an access control policy, there is no need for CS's participation; this function is triggered by the contract directly. In this way, DO retains its data ownership and controls the access policy at a fine granularity.

Our model contains a data operation protocol, fused with blockchain-based access policies, to support data transparency and auditability. In this protocol, DO has the right to read and write data, whereas DU can only read the data when authorized. The data operation protocol includes both a data flow and a metadata flow, as shown in Fig. 2. In the metadata flow, the DO/DU acting as initiator first sends a metadata request to the contract. After the contract responds to the request and verifies permission, the metadata of the data operation is stored as a log in the Ethereum blockchain. In the data flow between DO/DU and CS, a data operation is executed only after it is confirmed against the log in the Ethereum blockchain. In this way, all executed data operations have log records that cannot be tampered with, and the logs are stored in the transparent Ethereum blockchain.

3.1 Identities and pseudonyms

Users and CS each have their own blockchain account, whose key pairs and addresses serve as the unique pseudonymous identifiers of their identities. Traditional Public Key Infrastructure (PKI) is centralized: the PKI management server must not only be trusted by all users, but is also tasked with combating various attacks on users' identities in an open network environment. To remove these requirements from the cloud server, our model lets every actor in the system choose a unique sequence of numbers as its own private key. Each actor uses a number generator to produce a random stream of numbers; part of this stream is then taken into a SHA256 hash calculation.

SecretKey = SHA256(RandomStream)

Subsequently, we use the elliptic curve cryptographic algorithm defined by the secp256k1 standard [45]. On the elliptic curve field, the SecretKey is multiplied by the generator to obtain the user's public key.

PublicKey = SecretKey ⁎ g

In the above formula, g is the generator of the elliptic curve domain, and ⁎ is multiplication in the elliptic curve domain. The difficulty of the discrete logarithm calculation guarantees the security of the private key.

In addition, each user derives a unique pseudonym from the RIPEMD160 (RACE (Research and development in Advanced Communications technologies in Europe) Integrity Primitives Evaluation Message Digest 160) hash of its PublicKey. Thus every user has a 160-bit pseudonym called an address.

Address = RIPEMD160(PublicKey)
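The key and address derivation above can be sketched as follows. This is a simplified illustration: the secp256k1 point multiplication that turns SecretKey into PublicKey is omitted, and RIPEMD160 availability in `hashlib` depends on the local OpenSSL build:

```python
import hashlib
import secrets

def make_secret_key(random_stream: bytes) -> bytes:
    # SecretKey = SHA256(RandomStream): part of the random stream is hashed
    return hashlib.sha256(random_stream[:32]).digest()

def make_address(public_key: bytes) -> str:
    # Address = RIPEMD160(PublicKey): a 160-bit pseudonym
    # (the secp256k1 multiplication producing public_key is not shown here)
    return hashlib.new("ripemd160", public_key).hexdigest()

secret_key = make_secret_key(secrets.token_bytes(64))
```

Because the address is a hash of the public key, it reveals nothing about the underlying identity, which is what makes it usable as a pseudonym.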

3.2 Blockchain-based dynamic access control mechanism

Utilizing the Event Mechanism in the Ethereum blockchain, we design our data access control policy, as shown in Fig. 3.


Fig. 3. Structure of share event as data access policy.

Source: author.

In this figure, the structure of the Share Event as a data access policy consists of six parts:

1. The event type, "Share," meaning the access control policy of data sharing.

2. "Data User Address," the authorized DU identifier, filled with DU's Ethereum account address.

3. "Data RootHash" (drh), the unique data identifier, computed by building an MHT over the data blocks.

4. "Timestamp," the time point at which the access policy was generated.

5. "Authority," the authorization value for DU over the data after this policy is updated: 1 means access authorized and 0 means authorization revoked.

6. The data decryption key distributed to DU. When "Authority" is 0, this field is 0; it holds legal content only when "Authority" is 1, namely dk encrypted with the public key of the DU account, expressed as ⟦dkDU⟧.
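The six-part structure above might be modeled as follows. This is a Python stand-in for the on-chain event, not the chapter's actual contract code; field names follow Fig. 3, and the types are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ShareEvent:
    event_type: str         # part 1: always "Share"
    data_user_address: str  # part 2: authorized DU's Ethereum address
    data_roothash: str      # part 3: drh, the unique data identifier
    timestamp: int          # part 4: policy generation time point
    authority: int          # part 5: 1 = authorized, 0 = revoked
    enc_dk: str             # part 6: [[dk_DU]] when authority is 1, else "0"

def make_share_event(du_addr: str, drh: str, ts: int,
                     authority: int, enc_dk: str) -> ShareEvent:
    # Enforce part 6: the key field is 0 whenever authority is 0
    key_field = enc_dk if authority == 1 else "0"
    return ShareEvent("Share", du_addr, drh, ts, authority, key_field)
```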

The dynamic updating of the access policy enables the dynamic access control sub-protocol shown in Fig. 4. The major steps are listed below.


Fig. 4. Protocol of dynamic access policy updating.

Source: author.

1. If DU wants to download DO's data from CS, it first sends an Ethereum blockchain transaction to DO carrying the drh of the requested data.

2. After receiving the transaction, DO independently decides whether or not to share the data with DU. If DO would like to share the data, it first calculates dkDU for DU, then encrypts dkDU with DU's pkDU to generate ⟦dkDU⟧.

3. DO sends an Ethereum blockchain transaction to the cloud contract carrying the DU address, drh, the authority value, and ⟦dkDU⟧, and assigns the access policy update function of the contract to respond to the transaction. When DO wants to revoke DU's access permission for drh, DO sends a transaction containing the DU address, drh, and a revoked authority value to the cloud smart contract; this revocation transaction is handled by the same access policy updating function.

4. The cloud smart contract verifies that the message sender is DO, the only actor permitted to share the data through the contract.

5. If DO's identity is verified, a Share Event is triggered inside the cloud contract and the access policy is saved into the logs on the blockchain. The Share Event records the DU address, the requested drh, and ⟦dkDU⟧, along with a timestamp.
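The contract-side portion of these steps (4 and 5) can be sketched as follows, using in-memory dictionaries as hypothetical stand-ins for the contract's owner registry and event log:

```python
# In-memory stand-ins for contract state and the blockchain event log
owners = {}         # drh -> DO address (who may share this data)
policy_events = []  # emitted Share Events (the on-chain access policies)

def update_access_policy(sender: str, du_addr: str, drh: str,
                         authority: int, enc_dk: str, ts: int) -> bool:
    # Step 4: only the DO registered for this drh may update its policy
    if owners.get(drh) != sender:
        return False
    # Step 5: trigger the Share Event, recording the policy on-chain
    policy_events.append({"type": "Share", "du": du_addr, "drh": drh,
                          "authority": authority, "timestamp": ts,
                          "enc_dk": enc_dk if authority == 1 else "0"})
    return True
```

Revocation reuses the same function with authority set to 0, so grants and revocations accumulate as an ordered event history.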

In the cloud smart contract, the access policy updating function can be triggered by a transaction, after which the new access policy is stored in the Ethereum blockchain. On receiving a transaction requesting a policy update, the contract first verifies the validity of the transaction by its signature and sender identity. If the transaction is signed and sent by the DO of drh, the contract fills in the information parts of the Share Event (as listed above) and generates the new access control policy event. In this way, a blockchain structure is formed as follows (Fig. 5):


Fig. 5. Structure of blockchain in our model.

Source: author.

Here, in addition to the timestamp (T), the hash value of the previous block (Prevhash), the random nonce (N), and the transactions (Txs), each block includes the hash digests of events (e1, …, ex) and logs (l1, …, ly). As shown in Fig. 5, the Prevhash part links adjacent blocks, which can be expressed by the equation:

Prevhashn+1 = hash(Tn ‖ Prevhashn ‖ Nn ‖ Txsn ‖ hash(e1) ‖ … ‖ hash(ex) ‖ hash(l1) ‖ … ‖ hash(ly))

If an event recording an access policy is tampered with by malicious nodes, the hash chain between the block containing the event and its following block is broken; and the hash chain relation of the blockchain is maintained by all the nodes of the Ethereum network. Thus, the access control policy in our model cannot be tampered with, and the blockchain-based access policy supports dynamic, fine-grained access control for DO.
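This tamper-evidence argument can be illustrated with a small hash-chain check, assuming SHA-256 and a simple concatenation order (the chapter fixes neither):

```python
import hashlib

def block_digest(block: dict) -> bytes:
    # Prevhash_{n+1} = hash(T_n || Prevhash_n || N_n || Txs_n
    #                       || hash(e_1) || ... || hash(e_x)
    #                       || hash(l_1) || ... || hash(l_y))
    parts = [str(block["T"]).encode(), block["Prevhash"],
             str(block["N"]).encode(), block["Txs"].encode()]
    parts += [hashlib.sha256(e).digest() for e in block["events"]]
    parts += [hashlib.sha256(l).digest() for l in block["logs"]]
    return hashlib.sha256(b"".join(parts)).digest()

def chain_intact(chain: list) -> bool:
    # Tampering with any recorded event breaks the link to the next block
    return all(chain[i + 1]["Prevhash"] == block_digest(chain[i])
               for i in range(len(chain) - 1))
```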

3.3 Cloud data operation protocol fused with blockchain-based access policies

When personal sensitive data is outsourced to the cloud, it needs to be encrypted into ciphertext data blocks (dbs), and data operations, including data reading and data writing, have to involve CS. To improve the security, transparency, and auditability of data operations, our model establishes a cloud data operation protocol fused with blockchain-based access policies. The protocol not only controls access to cloud data with access policies in the blockchain, but also ensures that every executed operation has a corresponding tamper-proof log for transparency and auditability. The protocol is divided into two sub-protocols: data writing and data reading.

3.3.1 Data writing sub-protocol

In our model, only DO has data writing permission, which covers data uploading and updating. To verify DO's identity, the cloud smart contract maintains a legal data list containing all drhs and their corresponding DOs' addresses.

The data uploading sub-protocol is shown in Fig. 6. It contains the following steps:


Fig. 6. Sub-protocol of data uploading.

Source: author.

1. DO prepares dbs and its identifier drh.

2. DO sends dbs to CS.

3. DO sends a transaction including drh to the cloud smart contract and assigns the data uploading function to respond. After the contract receives the transaction, drh and its corresponding DO address are saved into a temporary array in the cloud smart contract.

4. CS saves the received dbs into a temporary space. CS then computes drh from dbs and sends drh to the contract, assigning the data uploading confirmation function to respond.

5. The contract compares the drh from CS with the drh in its temporary array. If an identical match exists, drh and its corresponding DO address are moved from the temporary array into the legal data list, a state variable of the contract, and the contract uses a blockchain event to write a data uploading log into the Ethereum blockchain. If there is no match of drh, the contract terminates the current sub-protocol.

6. CS runs a script program to watch for a new uploading log in the Ethereum blockchain.

7. CS matches the dbs in the temporary space against the new uploading log. If an identical match of drh exists, CS saves the uploaded dbs as legal storage. Otherwise, CS refuses to execute the final confirmation and deletes the unmatched dbs.
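The contract's matching step (step 5) can be sketched as follows, again with in-memory structures as hypothetical stand-ins for contract state and the on-chain log:

```python
# In-memory stand-ins for contract state and the blockchain log
pending = {}      # drh -> DO address (temporary array, filled in step 3)
legal_data = {}   # drh -> DO address (state variable: the legal data list)
upload_logs = []  # data uploading logs written to the blockchain

def confirm_upload(drh_from_cs: str, ts: int) -> bool:
    # Step 5: match the drh computed by CS against the temporary array
    if drh_from_cs not in pending:
        return False  # no match: terminate the sub-protocol
    legal_data[drh_from_cs] = pending.pop(drh_from_cs)
    upload_logs.append({"type": "upload",
                        "owner": legal_data[drh_from_cs],
                        "drh": drh_from_cs, "timestamp": ts})
    return True
```

Requiring drh from both DO (via the transaction) and CS (recomputed from dbs) means an upload is logged only when both parties agree on the data's identity.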

The structure of data uploading log is shown in Fig. 7.


Fig. 7. Structure of upload log for data uploading.

Source: author.

In this figure, the structure of the upload log for data uploading consists of four parts:

1. The log type, defined as data uploading.

2. "Data Owner Address," the identifier of DO.

3. "Data Roothash," i.e., drh, the identifier of the data.

4. "Timestamp," the time point of this operation.

The log is generated by the cloud smart contract, which fills the transaction information into the corresponding parts.

In our model, the data updating sub-protocol is similar to that of data uploading, except in the following two aspects:

For the data flow, DO only sends part of data (data blocks to be updated) to CS.

For the metadata flow, DO uses both the old drh and the new drh to identify the updating operation. In the matching step of the cloud contract and in CS's final confirmation, both the old drh and the new drh must be identically matched. Accordingly, our data updating log uses the two drhs instead of a single drh, as shown in Fig. 8.


Fig. 8. Structure of update log for data updating.

Source: author.

3.3.2 Data reading sub-protocol

Reading data in the cloud environment is usually called data downloading. Our data downloading sub-protocol differs slightly between DO and DU, as shown in Fig. 9.


Fig. 9. Sub-protocol of data downloading.

Source: author.

The data reading sub-protocol contains the following steps.

1. The downloader (DO/DU) sends a transaction including drh to the cloud contract and assigns the data downloading function to respond.

2. The contract verifies whether the address of the transaction's source account is the same as DO's address.

3. If the transaction source account is the same as DO's address, the contract saves a DO downloading log into the Ethereum blockchain.

4. If the transaction source is not the same as DO's address, the contract saves Event1 into the blockchain.

5. CS runs a script program to watch for Event1 on the blockchain. When a new Event1 is observed, CS queries the blockchain for a share event recording drh and the DU address.

6. If no matching share event exists in the blockchain, or the latest matching share event holds an access authority value of 0, CS terminates this protocol.

7. If matching share events exist in the blockchain and the latest of them holds an access authority value of 1, CS extracts ⟦dkDU⟧ from the access policy and sends it to DU in a transaction. At the same time, CS sends a transaction including drh and the DU address to the contract to confirm the download.

8. In the contract, the data downloading function is triggered by the download confirmation transaction from CS. The contract then saves a DU downloading log into the blockchain.

9. CS watches for the new downloading log on the blockchain with its local script program.

10. CS matches the new downloading log against the existing cloud data. If the log matches an existing drh, CS sends the ciphertext data to DU. Otherwise, CS refuses to execute the downloading operation.
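The authorization decision in steps 6 and 7, where only the latest matching share event counts, can be sketched as a small CS-side check (event fields are the hypothetical stand-ins used earlier, not the chapter's actual contract code):

```python
def authorize_download(share_events: list, du_addr: str, drh: str):
    # Steps 6-7: only the latest matching share event decides access,
    # so a later revocation (authority 0) overrides an earlier grant
    matching = [e for e in share_events
                if e["du"] == du_addr and e["drh"] == drh]
    if not matching or matching[-1]["authority"] != 1:
        return None  # never shared, or access since revoked
    return matching[-1]["enc_dk"]  # [[dk_DU]] to forward to DU
```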

The structure of the log for data downloading is shown in Fig. 10. In the data downloading sub-protocol, the cloud contract fills the corresponding information into every field to generate a downloading operation log. Here the "download" field defines the log type; it is followed by the downloader identity, the drh of the downloaded data, and the time point of the downloading operation.


Fig. 10. Structure of log for data downloading.

Source: author.

Finally, DO or authorized DU decrypts downloaded dbs to read the plaintext data content.


URL: https://www.sciencedirect.com/science/article/pii/S006524582030084X

A novel access control protocol for secure sensor networks

Hui-Feng Huang, in Computer Standards & Interfaces, 2009

To the best of our knowledge, most previous key pre-distribution schemes cannot be easily implemented as dynamic access control, because all the old secret messages of existing nodes have to be changed once a new sensor node is added [3,4,5,8]. Therefore, with regard to efficiency and communications, we compare the proposed protocol with Zhou et al.'s [11], which is an access control scheme based on ECC (elliptic curve cryptography). In both the proposed method and Zhou et al.'s protocol, the most expensive operation is the point multiplication of the form kP, for k ∈ Zn⁎ and P a point in a cyclic group of points over an elliptic curve [14,15]. Compared to RSA, ECC can achieve the same level of security with smaller key sizes [14,15]. Therefore, at the same security level, the smaller key sizes of ECC offer faster computation, as well as memory, energy, and bandwidth savings.


URL: https://www.sciencedirect.com/science/article/pii/S0920548908000676

Which of the following is a file sharing protocol that allows users to access files and folders?

Feature description. The Server Message Block (SMB) protocol is a network file sharing protocol that allows applications on a computer to read and write to files and to request services from server programs in a computer network. The SMB protocol can be used on top of the TCP/IP protocol or other network protocols.

What process identifies and grants access to a user who is trying to access a system group of answer choices?

Authorization is a process by which a server determines if the client has permission to use a resource or access a file. Authorization is usually coupled with authentication so that the server has some concept of who the client is that is requesting access.

Which firewall rule group must be enabled to allow for the remote use of the Task Scheduler snap in?

To enable Remote Administration in Windows Firewall, use the command netsh advfirewall firewall set rule group="Remote Administration" new enable=yes. This will enable remote management for any MMC snap-in.

What are the rules that affect what happens to permissions when a file or folder is created copied or moved?

By default, an object inherits permissions from its parent object, either at the time of creation or when it is copied or moved to its parent folder. The only exception to this rule occurs when you move an object to a different folder on the same volume. In this case, the original permissions are retained.