DAASI International Glossary

On this website, we use many technical terms and abbreviations. Unfortunately, this cannot be avoided due to the highly specialised and technical work of DAASI International.

So that you do not have to look up every term on other websites, here is a glossary of the most important terms and background information, which gives you more precise insight into the work of DAASI International.

AAI stands for Authentication and Authorization Infrastructure. An AAI allows for seamless access to digital resources. Depending on the rights assigned to them within the AAI and the federation in which the AAI technology is used, users log in with the account of their home organisation and can then access resources throughout the whole federation of organisations.

AAIs are crucial for cooperation and collaboration, for instance in the higher education and research sector. In Germany, the AAI of all higher education institutions is the DFN-AAI.

Shibboleth, an implementation of the SAML standard, is one of the most widely used software products for participating in the DFN-AAI.

The German National Research and Education Network (Deutsches Forschungsnetz) is a communication network for science and research in Germany; it connects German universities and research institutions with the internet by providing the corresponding backbone. The DFN is integrated into the European and global community of research and science networks and represents the German universities in the corresponding European initiatives and institutions, particularly GÉANT.

The independent, non-profit DFN association (association for the promotion of a German research and education network) administers the German National Research Network and promotes its advancement and utilisation. In addition to the internet connectivity, it provides numerous value-added services to the member institutions, for example the DFN-PKI, security and legal advice, the DFN-AAI etc.

Here you can find an overview of the DFN services.

Digital Humanities, also called eHumanities, describe the application of digital methods in humanities and cultural studies.

As early as 1949, the pioneer Father Roberto Busa started to use IBM mainframe computers to create a word index of the works of Thomas Aquinas. In a way, this makes him the founder of computer-based philology and the Digital Humanities.

Since then, many disciplines of the humanities and cultural studies have gained knowledge through computer-based processes, the systematic use of digital resources and so-called virtual research environments for the humanities. For this purpose the term eHumanities was adopted, modelled on the term eScience used in the natural sciences, in which the e stands for enabling. Nowadays, the term Digital Humanities is more common.

Important projects in the field of Digital Humanities are:

  • TUSTEP (Tuebingen System of Text Processing tools), a tool to scientifically edit text data
  • TextGrid: Virtual research environment for the humanities; a platform with the objective to support the access and exchange of information in humanities and cultural studies by using modern information technology and infrastructure
  • DARIAH (Digital Research Infrastructure for the Arts and Humanities), a European research network for the development of infrastructures for research environments in the humanities, which integrates the corresponding national projects, such as DARIAH-DE
  • HRA (Heidelberg Research Architecture), a virtual research environment for transcultural research in the excellence cluster “Asia and Europe” at the University of Heidelberg
  • AARC (Authentication and Authorization for Research and Collaboration), an EU-funded project for the development of a global federation infrastructure based on eduGAIN.

DSML (Directory Service Markup Language) enables the exchange of directory data through XML. It is essentially the XML counterpart of LDIF.
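To make the relationship concrete, the following sketch shows the same (hypothetical) directory entry once as an LDIF record and once as DSML-style XML built with Python's standard library; the element names are only indicative and may differ slightly from a real DSMLv2 document.

```python
# A minimal sketch (hypothetical entry data) contrasting an LDIF record
# with the same entry expressed as DSML-style XML.
import xml.etree.ElementTree as ET

ldif_entry = """dn: uid=jdoe,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
cn: Jane Doe
mail: jdoe@example.org
"""

# Build the equivalent DSML-style entry element.
entry = ET.Element("dsml:entry", {"dn": "uid=jdoe,ou=people,dc=example,dc=org"})
for name, value in [("objectClass", "inetOrgPerson"),
                    ("cn", "Jane Doe"),
                    ("mail", "jdoe@example.org")]:
    attr = ET.SubElement(entry, "dsml:attr", {"name": name})
    ET.SubElement(attr, "dsml:value").text = value

print(ldif_entry)
print(ET.tostring(entry, encoding="unicode"))
```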

Similar to a political federation of independent countries, a federation in IT is a coalition of organisations within a larger IT-based infrastructure. The characteristics of a federation are:

  • every single organisation has its own internal structure
  • trust between the individual organisations, which is reinforced by contracts
  • the participating organisations agree to the standards of data exchange
  • there is a central entity that brokers contracts, monitors compliance with standards and operates central federation components like the Discovery Service

Essentially, it is an alliance built on trust. In the strict sense of the term (also called Federated Identity Management) the alliance refers to a cross-organisational authentication and authorisation infrastructure (AAI). This kind of AAI enables users to access various services offered by different organisations without having to create an account with each organisation. Users authenticate themselves at their home organisation; the service provider is informed through assertions as standardised, for instance, in SAML. In Germany, the largest AAI federation is the DFN-AAI of the national research network.

Learn more about the benefits of a federation and how DAASI International can support you.

FIDO2 is a standard for online authentication that resulted from a cooperation between the FIDO Alliance and the W3C. The acronym FIDO stands for Fast IDentity Online. FIDO2 combines the standards CTAP (formerly U2F) and WebAuthn. Its predecessor, FIDO v1.0, was released in late 2014 as a network protocol for the aforementioned U2F (Universal Second Factor).

With FIDO2 it is possible to supplement password-based authentication or even replace the password entirely. Usually, a hardware token (e.g. YubiKey or Nitrokey) is used for this purpose. When the web service requests it, the hardware token only needs to be tapped to physically prove one’s identity. As an alternative to the physical token, it is also possible to use built-in security features of smartphones or laptops for FIDO-based authentication. Thus, biometric information such as a fingerprint scan or facial recognition can be used as the second factor or as the password alternative.

Informative graphic about FIDO2 (graphic by Yubico)

The mechanism behind FIDO2 is based on the principles of asymmetric encryption. Two steps should be distinguished: first, a token is registered with a server as part of the so-called enrolment process, so it can be used again in the future. In this process, which is initiated by the client (usually the web browser), a one-time challenge is sent from the server (“relying party”) via the client to the token. The token then generates a public and a private key, of which only the public key is sent back to the server.

For the actual authentication, the server again sends a one-time challenge, which the token signs with the private key generated during enrolment. Finally, the server can verify the signature with the stored public key.
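The following minimal sketch illustrates this challenge/response principle with the Python cryptography library; it is not a real WebAuthn implementation, and all variable names are purely illustrative.

```python
# Minimal sketch of the FIDO2 challenge/response principle (not a real
# WebAuthn implementation). Requires the "cryptography" package.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Enrolment: the token creates a key pair, only the public key goes to the server.
token_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = token_private_key.public_key()

# --- Authentication: the server (relying party) sends a one-time challenge ...
challenge = os.urandom(32)

# ... the token signs it with its private key ...
signature = token_private_key.sign(challenge)

# ... and the server verifies the signature with the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("authentication successful")
except InvalidSignature:
    print("authentication failed")
```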

Grid Computing is a form of distributed, decentralised data processing. For this purpose, various computers join to form a kind of virtual supercomputer. Thanks to the combined computing power, such a virtual computer is able to execute the most complex calculations and operations more efficiently and faster than a single high-performance computer. In contrast to Cluster Computing, which works in a basically similar way, the individual computers of a Grid can be located anywhere geographically and are only loosely coupled. Accordingly, the Grid provides a reliable hardware and software infrastructure for processing high-performance operations, while being simple and individually scalable according to user requirements. Further advantages of Grid Computing compared to conventional Cluster Computing are the open, standardised protocols and interfaces used, as well as the fact that a Grid can be formed from standard computers that are significantly more cost-effective than multi-processor supercomputers.

Basically, Grid Computing is the predecessor of the now prevalent Cloud Computing. Grids combine resources that are not subordinated to a central authority, so they are formed across domains, organisations and countries. The resulting mergers are used by so-called Virtual Organisations (VOs).

A Virtual Organisation is understood as a permanent or temporarily limited consortium of geographically spread individuals, groups, organisational units or whole organisations that combine parts of their physical or logical resources and services, their knowledge and competences as well as parts of their information basis, so that common goals can be achieved (cf. I. Foster, C. Kesselman, and S. Tuecke, “The Anatomy of the Grid: Enabling Scalable Virtual Organizations,” International Journal of High Performance Computing Applications, vol. 15, no. 3, pp. 200–222, Aug. 2001).

The D-Grid Initiative, funded by the German Federal Ministry of Education and Research (BMBF), was the national Grid initiative of Germany, which aimed to establish a sustainable Grid Computing infrastructure for research and development in the academic and industrial field in Germany through numerous projects. TextGrid, the only humanities project in the D-Grid Initiative, turned out to be its most sustainable project, since it became part of DARIAH-DE and now CLARIAH-DE.

Identity Management (IdM) refers to the use of dedicated IT technologies to manage information about the identity of users and their access to resources via group memberships or role assignments (e.g. for access to internal services of companies or organisations). The goal of Identity Management is to increase efficiency and security while reducing the costs of administering users and their identities, attributes and authorisations (cf. Spencer C. Lee: “An Introduction to Identity Management”).

Most of the administrative expenses that occur in conventional user management within companies or organisations are preventable. They arise primarily because a single user can cause multiple administrative efforts if they are administered separately in different databases and/or sectors of the organisation. Resetting forgotten passwords adds to the administrative workload. In larger organisations, such redundant administration efforts quickly grow into unnecessarily complex administrative processes. (Figure 1 shows this scenario in an example organisation.)

Figure 1:

Redundant administration structures result in an increased workload and inconsistent data. Additionally, the user has to invest more time and work: for every single service, the user has to apply for an account and remember the respective password. If their information changes (e.g. a name change due to marriage or a change of address due to moving), all data administrators must be notified individually, which obviously does not happen all too often. If an employee leaves, their information in all databases must be deleted or disabled; for this purpose, the user frequently has to complete a lengthy circulation slip.

If, for example, a student takes on the position of a research assistant at the university, the student must be entered into the employees’ registry with a corresponding set of data in addition to being registered as a student. The systems would never be able to detect that the separate sets of data belong to the same person.

Infographic: unnecessarily complex data management without an identity management system

Figure 2:

With the technologies used in Identity Management, the individual systems are connected via a centralised database. This can for instance be a so-called metadirectory, which is implemented with directory service technologies (e.g. OpenLDAP). As a result, existing redundancies are eliminated and the workload is minimised (cf. Figure 2). Source databases previously defined as authoritative are synchronised with the metadirectory, which then provides the respective applications with the necessary data (orange dotted arrows).

This means only a single source database has to be adjusted when changes occur. The metadirectory can also directly manage account-specific information: besides login name and password, attributes such as the home directory and assigned disk space. Overall, IdM ensures that all applications always work with up-to-date data. Additionally, the user’s workload is significantly reduced: they only have to contact the administrator of the respective source system (in our example either the employees’ administration or the self-service desk, orange arrows). The administration is also relieved, as information only has to be entered and maintained in one place. Consequently, users have only one password for all applications (also called unified login), so they are less likely to forget their password, which means the help desk has to reset passwords only on rare occasions.
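The following purely illustrative Python sketch (with hypothetical source systems and attributes) shows the basic idea: authoritative sources are merged into one consolidated entry per person, from which connected applications are then served.

```python
# Purely illustrative sketch (hypothetical data) of the metadirectory idea:
# authoritative source systems are merged into one central record per person,
# which connected applications then read from.
authoritative_sources = {
    "student_registry": [
        {"id": "s123", "name": "Jane Doe", "mail": "jdoe@uni.example"},
    ],
    "hr_system": [
        {"id": "e456", "name": "Jane Doe", "mail": "jdoe@uni.example", "room": "A 2.17"},
    ],
}

metadirectory = {}
for source, records in authoritative_sources.items():
    for record in records:
        # Match on the mail address (a real IdM system would use better join rules).
        person = metadirectory.setdefault(record["mail"], {"affiliations": []})
        person.update({k: v for k, v in record.items() if k != "id"})
        person["affiliations"].append(source)

# An application provisioned from the metadirectory sees one consolidated entry.
print(metadirectory["jdoe@uni.example"])
# {'affiliations': ['student_registry', 'hr_system'], 'name': 'Jane Doe', ...}
```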

Combined with modern access management technologies like SAML or OIDC, a so-called single sign-on (SSO) can be implemented, so the user has to authenticate only once a day and is automatically authenticated for all connected systems.

Infographic: data management simplified with identity management using a centralised metadirectory

Besides directory services, various other technologies are used in Identity Management, for example PKI-based SSL and TLS for secured communication between the different components, RBAC or XACML for an efficient administration of access policies, Kerberos, SAML (e.g. Shibboleth) and OIDC for Single Sign-On functionality, or SPML for provisioning of target systems like directories, databases and applications.

An extension of Identity Management is Federated Identity Management (FIdM), which extends IdM beyond a single organisation. In this case, multiple organisations join in a federation to grant each other’s users access to their respective services. In principle, they use the same technologies to do so. In this context, particularly the SAML standard and its implementations Shibboleth and SimpleSAMLphp become relevant.

Here you can find an overview of the advantages of Identity Management as well as of the services and products of DAASI International.

Further information:

Spencer C. Lee: An Introduction to Identity Management (PDF)

Peter Gietz: Identity Management an deutschen Hochschulen (PDF)

Kerberos refers to an authentication protocol for computer networks. Kerberos works on the basis of tickets; it allows for secure authentication in the network and thus also supports Single Sign-On. It is an essential technology in Microsoft Active Directory.

LDAP stands for Lightweight Directory Access Protocol and is the successor technology of X.500. Like HTTP, it is a network protocol standardised by the IETF and used for access to directory services, including authentication operations (see RFC 4510–4519). The associated hierarchical data model and commonly used schemas are specified in the LDAP standard as well. LDAP has established itself especially in identity management and for the implementation of authentication services. Thanks to the flexibly extensible data model, almost any other information can be managed in the network with LDAP.
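As an illustration, a typical bind and search might look as follows with the Python ldap3 library; the server address, bind DN and password are hypothetical.

```python
# Sketch of a typical LDAP search with the Python "ldap3" library
# (server address, bind DN and password are hypothetical).
from ldap3 import Server, Connection, ALL

server = Server("ldap://ldap.example.org", get_info=ALL)
conn = Connection(server,
                  user="uid=admin,dc=example,dc=org",
                  password="secret",
                  auto_bind=True)

# Search the whole subtree for persons and read two attributes.
conn.search(search_base="dc=example,dc=org",
            search_filter="(objectClass=inetOrgPerson)",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```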

 

Multi-factor authentication (MFA) is a security mechanism that secures access to digital systems, services or data by using multiple authentication methods. In contrast to the conventional standard authentication process, in which only one factor is requested (usually a password), MFA requires at least two or more of the following factors:

1. Knowledge factor: something the user knows, such as a password, a PIN or an answer to a secret question.
2. Possession factor: something that only the user possesses, such as a smartphone, hardware token or smartcard.
3. Inherence factor: something that is unique to the user, such as their fingerprint, iris or other biometric features.

Because potential attackers would not only need to know the user’s password, but would also need physical access to the possession factor and/or have to replicate the user’s biometric features, MFA significantly increases security against unauthorised access by third parties. MFA is used in particular where especially sensitive data is involved, for example in online banking, online trading or eGovernment. As attacks, for example via ransomware, are becoming more frequent, MFA is increasingly being introduced for prevention and is also required by more and more applications.

A good open source implementation of MFA is eduMFA.
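As a simple illustration of a possession-based second factor, the following sketch uses a time-based one-time password (TOTP) with the Python pyotp package; the enrolment and login steps are merely simulated here.

```python
# Sketch of a time-based one-time password (TOTP) as a second, possession-based
# factor, using the "pyotp" package (secret and user input are simulated).
import pyotp

# Done once during enrolment: generate a shared secret and hand it to the
# user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login: the first factor (password) is checked elsewhere; the second factor
# is the 6-digit code currently shown in the authenticator app.
code_from_user = totp.now()          # simulated user input
print(totp.verify(code_from_user))   # True if the code matches the current time window
```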

OAuth2 replaces OAuth 1.0, which struggled with significant security issues. The open protocol is used for the authorisation of access to resources on the internet and is based on established standards like HTTP, TLS and JSON. Clients no longer have to handle users’ sensitive credentials; instead, the Authorization Server (AS) of the resource issues them an access token, which the client presents to the resource to access the content.

The access token expires after a certain amount of time. Usually, the AS issues the access token together with a refresh token. When the access token expires, a new access token can be obtained from the AS by using the refresh token.
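A refresh-token request is a plain HTTP POST to the token endpoint. The following sketch uses the Python requests library; the endpoint URL, client credentials and refresh token are hypothetical.

```python
# Sketch of the OAuth2 refresh-token grant with the "requests" library
# (endpoint URL, client credentials and refresh token are hypothetical).
import requests

response = requests.post(
    "https://as.example.org/oauth2/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": "stored-refresh-token",
        "client_id": "my-client",
        "client_secret": "my-client-secret",
    },
    timeout=10,
)
tokens = response.json()
print(tokens["access_token"], tokens.get("expires_in"))
```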

The defined flows (grant types) include:

  • Web Server: access through the browser to the server (client) and the resource
  • User Agent: for clients (e.g. JavaScript) in the browser
  • Native Application: for programs in stationary or mobile devices in combination with a browser
  • Device: for devices without a keyboard; input is provided via a second device
  • Client Credentials: for secure client service connections without user interaction

In all flows (except Client Credentials), the user confirms that the client software is authorised to access a service or data.

Any authentication validating the identity of the user is referred to external mechanisms. One possible type of token are Bearer Tokens: whoever possesses the token has all necessary rights and does not need to identify themselves through other means, for example certificates.

Bearer Tokens are the most common type of token. In the context of authentication, the use of OpenID Connect is highly recommended, as it was specified as an authentication layer on top of OAuth2. Nonetheless, other methods such as SAML assertions can also be used in this context.

A number of large service providers already work with the protocol, for example Facebook, Google, GitHub, Microsoft and bitly, among others.

Further information:

IETF sites for standardisation of OAuth2

Open source software is software whose source text (program code) is publicly accessible. Open source software has several significant advantages over commercial products:

  • As anyone can continue developing their own copy of the software (a so-called fork), the customer is independent of a specific developer or vendor and can flexibly steer further development.
  • Almost all open source products come with different options for commercial support offered by different companies. This leaves the customer with a free choice in this matter as well and relieves them of the burden of software maintenance and programming new features.
  • A global and independent developer community works together on the product, so that improvements and extensions can quickly be implemented and possible security gaps can be detected and closed earlier.
  • Usually, open source products are free of license fees.

Therefore, open source software is safer, more efficient and more flexible than proprietary “closed source” software, whose further development depends on the roadmap of a company that primarily pursues its own commercial interests.

In a narrower sense, open source software must be published under a license certified by the Open Source Initiative (OSI). The criteria of the OSI are based on the Open Source Definition, which goes far beyond the mere availability of the source text.

A fundamental distinction between open source licenses is whether a viral effect (also called copyleft) is enforced for the publication of derived works or not:

  • Licenses without a copyleft effect are characterised by the fact that they grant the licensee all the freedoms of an open source license and do not contain any conditions concerning the license type to be used for modifications of the software. The licensee can therefore distribute the changed software version under any desired license conditions, including turning it into proprietary software.
  • For licenses with a copyleft effect, the licensee is obligated to only disseminate works that are derived from the original software under the conditions of the license of origin.

 

Further information:
Here you can find the website of the Open Source Initiative.

The protocol OpenID Connect (OIDC) is completely based on OAuth2 and adds an authentication layer. To activate this add-on, the scope “openid” must be included when requesting the OAuth2 token. The authorisation server then issues, in addition to the access token (OAuth2 token type “Bearer”), an ID token for the client, which contains information about the identity of the user. Additionally, further attributes (“claims”) about the user can be obtained from the UserInfo endpoint.
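The following sketch illustrates these two additions, an authorisation request containing the scope “openid” and the ID token returned as a signed JWT; all URLs and identifiers are hypothetical, the ID token is only simulated, and its signature is not verified here for brevity (in production it must be verified against the provider's published keys).

```python
# Sketch of the OIDC additions to OAuth2 (all URLs, identifiers and the
# ID token are hypothetical or simulated). Requires the "PyJWT" package.
from urllib.parse import urlencode
import jwt  # PyJWT

# The "openid" scope in the authorisation request activates OIDC.
auth_request = "https://op.example.org/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "my-client",
    "redirect_uri": "https://app.example.org/callback",
    "scope": "openid profile email",
    "state": "random-state-value",
})
print(auth_request)

# The token response then contains an id_token (a signed JWT) in addition to
# the access token. Here the ID token is merely simulated; in production its
# signature must be verified against the provider's published JWKS.
id_token = jwt.encode({"iss": "https://op.example.org", "sub": "jdoe"},
                      "demo-secret", algorithm="HS256")
claims = jwt.decode(id_token, options={"verify_signature": False})
print(claims.get("iss"), claims.get("sub"))
```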

In general, OpenID Connect targets similar applications as SAML. However, besides the three entities, user (=user agent), identity provider (=authorisation server), and service provider (=relying party), an OpenID provider is included as the fourth entity, which leads to a more complex communication profile.

In detail, the standard includes specifications about:

  • Basic Client Profile – specification of the minimal support of the general OAuth2 protocol for a web-based service provider (relying party).
  • Implicit Client Profile – specification of the minimal support of the implicit OAuth2 flow for a web-based service provider (relying party).
  • Discovery – (optional) specifies how user and provider information (endpoints) can be found.
  • Dynamic Registration – (optional) specifies how the service provider can dynamically register with OpenID providers.
  • Standard – the complete HTTP bindings specification for service providers and OpenID providers.
  • Messages – specification of all messages that are used within OpenID Connect.
  • Session Management – (optional) specifies how sessions can be managed in OpenID Connect.
  • OAuth 2.0 Multiple Response Types – specifies the OAuth2 return types.

OpenID Connect is the successor of the protocol OpenID, which is less secure and not compatible with OAuth2. The OpenID Connect specifications include different profiles and are promoted by the OpenID Foundation (OIDF).

Even though the standard is complex and does not solve several issues which SAML already solved – particularly the trust relationship between the respective entities – it is supported by many big players like Google, Microsoft, PayPal, Ping Identity, Symantec, Verizon, Yahoo, Facebook, Intel and many others. This is why it is expected that the standard will establish itself at least in the commercial (social network) world. DAASI International uses OIDC and OAuth2 and integrates them with SAML-based infrastructures. It also provides several cross-protocol SSO solutions, in which the user can authenticate with SAML and is then also authenticated for OIDC-compliant applications, and vice versa.

Further information:

OpenLDAP is a reference implementation of LDAP (Lightweight Directory Access Protocol), which enables querying and modifying data provided by directory services via a dedicated network protocol. Database systems based on OpenLDAP are platform-independent, hierarchically organised, standardised and can be centrally administered.

From the very beginning of OpenLDAP’s development, the focus was on scalability and performance, which could be dramatically enhanced in recent years by the new data backend mdb, which is also widely used by NoSQL databases. According to head developer Howard Chu, the current version is able to handle billions of objects in databases as big as a few terabytes. Even in such large deployments, over 100,000 queries per second with latencies below one millisecond can be processed without any issues. Thus, even during high workload situations, the software runs smoothly. If the software does crash, it is most likely due to a hardware malfunction rather than the software itself. (cf. article in the Admin-Magazin)

Logo: OpenLDAP

Features and Standards supported by OpenLDAP

OpenLDAP has the following features:

  • high-performance system that allows more than 50,000 read accesses per second on corresponding hardware, even with large amounts of data (over 1 million entries), if the attributes used in the search filters are indexed
  • highly fail-safe multi-master clustering
  • a server can use multiple database back ends, which can be located on local hard disks or on SAN file systems
  • multi-client capability is achieved through ACL configuration
  • maximum number of open LDAP connections can be configured
  • completely LDAP v3 compatible and can therefore be addressed by all LDAPv3 supporting clients
  • Kerberos V based authentication (via SASL/GSSAPI)
  • SSL/TLS via START_TLS operation or via LDAPS
  • replication via Syncrepl protocol (RFC 4533)
  • configuration via LDAP data (subtree cn=config) so that configuration changes do not require a server restart
  • Pass-Through-Authentication via SASL PLAIN

OpenLDAP Included Contents

In addition to the server, the software package also includes other tools and necessary libraries. It consists mainly of the following components:

  • slapd – stand-alone LDAP daemon
  • back ends – where the actual access to data happens
  • overlays – allow modifying the behaviour of the back ends, and thus of slapd, without changing them directly
  • syncrepl – synchronisation and replication according to RFC 4533
  • client libraries that provide the LDAP protocol
  • complete documentation and manpages
  • tools, aids and examples

Further information:

Here is the homepage of the OpenLDAP project.

Public Key Infrastructure. The so-called asymmetric encryption technology gained ubiquitous relevance in the context of data protection and authentication as well as secure communication on the internet. This technology enables encryption of documents without previous exchange of secret keys, which usually has to happen in symmetric processes. A public key is used for encryption, which is mathematically related to a private, secured key, which the addressee uses for decryption.

A certificate is a public key that is bound to a specific person and validated by a trustworthy third-party institution (Certification Authority, CA for short). Besides encryption, the very same technology also enables digitally signing documents as well as verifying document authenticity. Related technologies are X.509, S/MIME, SSL and PGP.

A PKI (Public Key Infrastructure) is a hierarchical infrastructure for generating certificates for users and services. Thereby, a trust chain is built which is cryptographically secured. The following terms are relevant here:

  • Public Key: The public key can be seen by everyone and is usually published. Messages for a particular recipient can be encrypted or the signature of a sender can be verified with the public key. When the public key is digitally signed by a trustworthy place, it is called a certificate. There are several types of certificates that are marked through their purpose. The three main types are user certificates, server certificates and CA certificates.
  • Private Key: The private key is only accessible for the person for whom the certificate is issued or the person who is authorised to use the certificate (e.g. the administrator of a website). For user certificates this is the person him- or herself. In general, the private key is protected at least by a password.
  • RA (Registration Authority): By means of an RA, the personal details of a person get verified or rather it is ensured that a server certificate is requested by an authorised person. If the verification is completed, a CSR (Certificate Signing Request) is issued and forwarded to the responsible CA.
  • CA (Certificate Authority): A CA is a trustworthy institution which responds to CSRs and generates the certificate by digitally signing the public key. A CA can publish all certificates signed by it in public directories, for example in an LDAP server. In addition, the CA is responsible for revoking certificates whose private keys have been compromised and for publishing the corresponding revocation list, the so-called CRL (Certificate Revocation List). This information can also be obtained via a query service that speaks the Online Certificate Status Protocol (OCSP). Within one PKI, several CAs can be established that are authorised to sign CSRs. CAs can be arranged hierarchically: the higher CA signs the signing key of the lower CA, resulting in a verifiable trust chain.
  • PCA (Policy Certification Authority): This entity exists only once within a PKI, as the top CA of the CA hierarchy. The PCA defines the guidelines according to which identities are verified and certificates are generated and signed. These guidelines are published in the Certificate Policy (CP) and the Certification Practice Statement (CPS), which also specify in detail the measures for securing private keys.

Establishing a PKI can solve multiple issues: The identity of a user or service can be verified, data is encrypted while transferred, the integrity of data protected, and finally documents or emails can be signed digitally.
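The following sketch with the Python cryptography package illustrates the basic steps, key pair, CSR and CA signature, in a strongly simplified form; all names are hypothetical, and a real CA would of course add extensions, revocation handling and strict key protection.

```python
# Strongly simplified sketch of the basic PKI steps with the "cryptography"
# package: the user creates a key pair and a CSR, and a demo CA issues the
# certificate by signing the public key. All names are hypothetical.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# CA key and name (in a real PKI the CA has its own certificate chain and its
# key is kept highly protected).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo CA")])

# User side: the private key stays with the user, the CSR goes to the RA/CA.
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Jane Doe")]))
    .sign(user_key, hashes.SHA256())
)

# CA side: after the RA has verified the identity, the CA signs the public key.
now = datetime.datetime.utcnow()
certificate = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(ca_name)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)
print(certificate.subject, certificate.issuer)
```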

Despite all these advantages of a PKI, there are also disadvantages. Firstly, it is the task of the organisation operating the PKI, especially of the RAs, where reliable employees with technical understanding have personal contact with the users, to verify identities and to explain the basics of the system, which amounts to considerable cost. The weakest part of the trust chain are the users of the certificates: if an attacker is able to get hold of the private key of a user, they can act in the name of the user, so-called identity theft. If an attacker is able to obtain the private key of a CA, they can issue and distribute arbitrary valid certificates within the PKI and install them, for example, on web servers that infect computers with malware. Since certificates only have a limited validity (in many PKIs only one year), a rather complex process for renewing certificates needs to be carried out by the user. Therefore, training users and raising their awareness is a key task for every organisation operating a PKI.

In the RBAC model (Role Based Access Control), users of a system are assigned certain roles. These roles entail different levels of access rights to specific resources. For example, a role “department manager” could provide reading and writing rights for a resource, whereas the role “secretary” provides only reading rights. Administration is thus simplified because rights are assigned along with the respective role of a user. In order to alter a user’s rights, roles can be added to or removed from the user.
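A minimal sketch of this principle, using the example roles above with hypothetical users and permissions, could look as follows in Python.

```python
# Minimal sketch of the RBAC idea using the roles from the example above
# (role, user and permission names are hypothetical).
ROLE_PERMISSIONS = {
    "department_manager": {"report": {"read", "write"}},
    "secretary": {"report": {"read"}},
}

USER_ROLES = {
    "alice": {"department_manager"},
    "bob": {"secretary"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """A user may act on a resource if any of their roles grants the action."""
    return any(
        action in ROLE_PERMISSIONS.get(role, {}).get(resource, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "report", "write"))  # True
print(is_allowed("bob", "report", "write"))    # False
```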

The difference between groups and the roles used in RBAC lies less in the technical implementation than in the organisational use. One could differentiate them as follows:

  • A role is a characteristic that determines specific behavioural rules and patterns and is tied to specific rights and duties; in particular, roles correspond to functions within an organisation. Roles can be hierarchically structured, so that the role “secretary”, if scoped to an organisational unit, would only grant access rights to the identities in that department.
  • A group is a much more general concept: matching a single arbitrary characteristic is enough to define a group, for example “subscriber of mailing list X”.

DAASI International implemented the RBAC standard in its IAM software didmos. Originally, it was used in the module “Decision Point” (long-term clients know it as OpenRBAC). didmos1 Decision Point stores all necessary information (about users, roles, resources, etc.) in an LDAP server, which can speed up authorisation decisions as they can be expressed as an LDAP filter. In didmos 2, the Decision Point was completely rewritten and is now part of didmos 2 Core, so that all didmos 2 modules can use it for authorisation decisions. Since it provides a comprehensive REST API, it can also be used by any other application. In one of the next versions of didmos 2, a comprehensive web-based administration interface for the Decision Point will be provided.

didmos1 Decision Point can be integrated even in already existing systems with directory services, as the directory set-up is completely customisable. The client can decide which data should be stored in which place and in which directory. didmos1 Decision Point is also able to function in accordance with preexisting user administration structures. Applications can then access didmos Decision Point via different interfaces (SOAP, REST, PHP-API, SAML/XACML Check Access), so that the software can be flexibly integrated into a multitude of IT landscapes.

The software was developed within the framework of a graduate thesis supervised by DAASI International. DAASI International continues to support and advance the software as a ready-to-use RBAC implementation. It is fully open source and published under the LGPL licence.

The software has already been applied, further developed and adapted to individual requirements in commercial and non-commercial projects. Examples of non-commercial projects are the BMBF research projects TextGrid and DARIAH.

 

Further information:

REST (Representational State Transfer) is an architectural style for web services which is used as an alternative to SOAP-based web services. Essentially, REST replaces the rather complex XML-based payload and the complex transfer protocol with simple JSON and uses HTTP directly as the transport protocol.
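A typical REST interaction is therefore just an HTTP request returning JSON, as in the following sketch with the Python requests library against a hypothetical endpoint.

```python
# Sketch of typical REST calls (hypothetical endpoint): plain HTTP verbs
# and JSON instead of a SOAP envelope.
import requests

# Read a resource ...
user = requests.get("https://api.example.org/users/42", timeout=10).json()
print(user)

# ... and update it with another HTTP verb on the same URL.
requests.put("https://api.example.org/users/42",
             json={"mail": "new.address@example.org"},
             timeout=10)
```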

 

The Security Assertion Markup Language is an XML-based standard for the exchange of information on authentication, authorisation and user attributes. This exchange occurs between different security domains, more precisely between:

  • an Identity Provider (IdP) or Security Token Service (STS), which issues the information about identity and authorisations of a user, and
  • a Service Provider (SP) or Relying Party (RP), that receives the information and decides based on this information whether access to protected services is granted
  • In some cases a Discovery Service (DS) or Where-Are-You-From-Server (WAYF) is added so the user can select their home IdP.
Logo: SAML

IdPs and SPs have a trust relationship (federation) with each other, based on a list of the server certificates of all SPs and IdPs belonging to the same federation. Therefore, users who belong to security domain A can access the services in a security domain B without having their own account with B. This so-called “federated identity” is a characteristic of a SAML-based trust association.

Another central function of SAML is the so-called web single sign-on (Web-SSO). After authenticating once at the IdP, the user can access different services requiring authenticated access in a federation without re-entering credentials (see also → AAI). Authentication information issued at the beginning of a session (SAML assertions) stays valid within a federation; the IdP automatically issues this information for any further SP whose services the user wishes to access.

SAML offers an open, easily adjustable and highly interoperable protocol, which quickly became the de facto standard for web SSO solutions.

SAML is a well-established technology these days and there are at least 144 implementations. Noteworthy among the open source implementations are:

  • Shibboleth as one of the most comprehensive SAML implementations based on the library OpenSAML
  • SimpleSAMLphp which allows for easy integration of PHP based applications
  • SATOSA based on the library pysaml2

Also, commercial providers like Sun, PingIdentity or Microsoft implemented SAML in their products.

Further information:

Simple Authentication and Security Layer (SASL) is a standard framework for authentication and data security on the internet. By decoupling authentication mechanisms from application protocols, SASL allows for different authentication options for different protocols.

 

Security in the context of IT includes a wide range of important tasks. Security begins with the development of hardware and the physical protection, and ends with the responsibility of every individual user who uses a PC, a tablet or a smartphone.

Especially large companies with a rapidly growing amount of data face the challenge to

  • securely store the data,
  • ensure only authorised personnel has access to said data,
  • securely transfer the data from one system to another,
  • incorporate data into backups, as well as
  • easily erase all data after usage, e.g. for legal reasons (data protection laws), if necessary.

The technical means exist and should be universally applied: anti-virus software, PKI, encryption methods, identity management, etc. Moreover, all users must be sensitised, as their behaviour substantially affects security efforts.

Further information:

Self-Sovereign Identity (SSI) is a modern technology which allows persons and organisations to manage their identity data sovereignly. This approach of “user-centred identity” is the opposite of the model of “organisation-centred identity”. With user-centred identity, individuals maintain their own identities rather than an organisation doing so (e.g. an employer or external ID providers such as Google and Facebook). This increases both sovereignty and privacy.

Technically speaking, SSI is implemented with blockchain technology, which is essentially a maximally redundant way of storing data that does not require trust in the storing server. Thus users are able to maintain their own personal information, including digital credentials issued by other organisations, such as a driver’s licence, professional recommendations, qualifications, etc.

Data is stored in a so-called “wallet” and passed on to others in a controlled manner. The receivers of the information, in turn, can have it verified via the blockchain and do not need to store the data themselves. Wallets are not limited to personal use but can also be used by organisations.

In the future, SSI is going to become more important, especially with regard to the digitisation of society. Norbert Pohlmann wrote a highly recommendable and very detailed article (in German) explaining the technology.

In Europe, the Gaia-X project seeks to create an open, secure and trustworthy ecosystem on the basis of SSI. Together with Vereign AG, DAASI International explicitly supports this endeavour in the context of the Gaia-X Federation Services (GXFS).

Shibboleth® is an open source product based on the OASIS standard SAML (Security Assertion Markup Language) for shared, cross-organisational authentication and authorisation for web applications (Federated Identity Management). Shibboleth was developed within Internet2, a not-for-profit US computer networking consortium driven by higher education institutions. Today, its further sustainable development is ensured by the Shibboleth Consortium, consisting most importantly of Internet2, the Swiss national research network SWITCH, the British equivalent JISC, and NORDUnet, a collaboration of five Scandinavian research networks. Most other research networks and a number of universities are members, as well as three commercial members, one of which is DAASI International.

Logo: Shibboleth

The authentication of a user (identification) takes place at the beginning of a session at his or her home organisation (identity provider). The resource provider (service provider) is contractually committed to trust the identity provider and thus relies on its assurances (assertions).

After successful authentication, the session with the identity provider is recorded as a cookie in the browser of the user, so that they have to log in only once. For each further service provider whose services the user wishes to access, the identity provider automatically issues a new assertion. Thereby, the user is authenticated for all services in the trust federation for a certain period of time thanks to their one-time authentication (single sign-on, SSO). Because of this useful function, Shibboleth is also used independently of cross-organisational federations to implement SSO for web applications within organisations.

Shibboleth comprises a whole set of software products:

  • identity provider
  • service provider
  • two different services for locating the identity provider of the user: Centralized Discovery Service and Embedded Discovery Service
  • Metadata Aggregator for the administration of federation metadata
  • OpenSAML: libraries for SAML in C++ and Java

Further information:

Single sign-on (SSO) denotes an authentication process in which the user has to log in only once to use different services, for which he or she would otherwise need multiple accounts and go through multiple different login processes.

SMTP-Auth is an extension of SMTP with which a client authenticates to the mail server, for example with a user name and password.

SOAP (Simple Object Access Protocol) is an XML-based standard protocol for exchanging data on the internet or within a network.

 

SPML (Service Provisioning Markup Language) is an XML framework developed by the OASIS consortium for provisioning users, resources and service information within but also across organisation borders.

SPML defines provisioning as “the automation of all the steps required to manage (setup, amend & revoke) user or system access and entitlement rights to electronic services.” (cf. “An Introduction to the Provisioning Services Technical Committee”)

The open standard ensures a high interoperability between different systems but defines only the exchange format, not the format of the transported data. To improve the interoperability of diverse systems, the DSMLv2 profile for SPML and the SAML2.0 profile for SPML are defined.

DSMLv2 was especially developed for directory service data and outlines an open and yet extensible format for provisioning due to the combination with SPML.

In Federated Identity Management (FIdM) with SAML, SPML can be used to initiate provisioning and de-provisioning processes from the identity provider to the service provider. To this end, SAML assertions can also be transported within SPML messages and used to qualify or identify the target of a provisioning request.

As part of an identity management system, SPML offers the possibility to standardise the provisioning of existing systems and to facilitate the provisioning of future systems. This is because, if different proprietary solutions are applied instead of SPML, the interfaces are likely to be much more difficult to interconnect.

The SPML standard was published in 2000 as version 1.0; version 2.0 followed in 2006.

TLS-KDH is a protocol still under development for highly secure authentication and transport encryption, which combines the strengths of Kerberos, Diffie-Hellman and TLS. This protocol is intended to enable secure authentication even in the face of new challenges, e.g. those resulting from quantum computing.

The Components:

  • Kerberos: This protocol is well established under both Linux and Windows. It enables a secure authentication process in which both parties (client and server) can authenticate each other using a Kerberos server.
  • TLS (Transport Layer Security) allows for asymmetric encryption of transport protocols, such as HTTPS, LDAPS, etc., based on the X.509 standard.
  • DH (Diffie-Hellman) is a method for secure key exchange which also supports so-called Perfect Forward Secrecy (PFS). With PFS, even when a long-term key is leaked, it is impossible to derive any information about the session keys, which are negotiated at short intervals and are responsible for the actual transport encryption.

In TLS-KDH, these three protocols are used together with PFS, whereby a previously unattained level of security can be achieved. For the use case of authentication, the general mode of operation is based on the use of client certificates in TLS; the only difference is that in TLS-KDH, Kerberos tickets are used instead of X.509 client certificates.
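The Diffie-Hellman part, and why it yields Perfect Forward Secrecy, can be sketched with an ephemeral elliptic-curve exchange using the Python cryptography package; this only illustrates the key-exchange principle, not TLS-KDH itself.

```python
# Sketch of an ephemeral (elliptic-curve) Diffie-Hellman exchange with the
# "cryptography" package: both sides derive the same session secret, and
# because fresh key pairs are used per session, leaking a long-term key later
# does not expose past sessions (Perfect Forward Secrecy).
from cryptography.hazmat.primitives.asymmetric import x25519

# Each side generates an ephemeral key pair for this session only.
client_private = x25519.X25519PrivateKey.generate()
server_private = x25519.X25519PrivateKey.generate()

# Only the public keys are exchanged over the network.
client_shared = client_private.exchange(server_private.public_key())
server_shared = server_private.exchange(client_private.public_key())

assert client_shared == server_shared  # both sides now hold the same secret
print(client_shared.hex())
```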

Since TLS-KDH is still very new, there is no implementation of an actual use case yet. In the EU funded NGI Pointer project TA4NGI, DAASI International is working on an implementation for the authentication proxy SATOSA.

WSDL is an abbreviation for Web Services Description Language, a standard language for web services based on XML.

The eXtensible Access Control Markup Language (XACML) is an XML-based standard for representing and processing authorisation policies. While the XML standard SAML can regulate the exchange of this information, XACML defines its syntax and the way it is evaluated.

In detail, XACML contains a reference architecture that includes all components involved in the authorisation process; a language for formulating authorisation policies; a language for formulating authorisation requests and the corresponding responses (authorisation decisions); a processing model; as well as standard attributes and functions (arithmetic functions, Boolean operators, etc.).

The logical separation of the Policy Enforcement Point (PEP) and the Policy Decision Point (PDP) is a central aspect of the operating principle of XACML. An attempt to access a protected resource is first intercepted by an application-specific PEP; the PEP then collects all information about the requesting entity and converts it into an authorisation request, which is sent as an XACML document to the PDP. In contrast to the application-specific PEP, the fully standardised PDP decides on access based on the received information and rules predefined in XACML, and sends the decision back to the PEP. According to this decision, the PEP subsequently grants or refuses access.
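The following strongly simplified Python sketch illustrates the separation of PEP and PDP; it uses plain Python structures instead of real XACML documents, and all attribute names are hypothetical.

```python
# Minimal sketch of the PEP/PDP separation (plain Python instead of real
# XACML documents; all attribute names are hypothetical).
POLICY = [
    # rule: (subject role, resource, action) -> decision
    {"role": "doctor", "resource": "patient-record", "action": "read", "decision": "Permit"},
]

def pdp_decide(request: dict) -> str:
    """Policy Decision Point: evaluates the request against the policy."""
    for rule in POLICY:
        if all(request.get(k) == rule[k] for k in ("role", "resource", "action")):
            return rule["decision"]
    return "Deny"

def pep_enforce(user_role: str, resource: str, action: str) -> bool:
    """Policy Enforcement Point: builds the request, asks the PDP, enforces."""
    request = {"role": user_role, "resource": resource, "action": action}
    return pdp_decide(request) == "Permit"

print(pep_enforce("doctor", "patient-record", "read"))   # True
print(pep_enforce("nurse", "patient-record", "read"))    # False
```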

The standard developed by the OASIS consortium has been available in version 2.0 since 2003; the first draft of version 3.0 was published in April 2009.

XACML already contains profiles for SAML and RBAC, which allow easy integration of these standards.

Different implementations are in use, e.g. SunXACML (free), XACML Enterprise (free, declared the fastest implementation in 2008), HERAS-AF (open source), and Axiomatics (commercial, implements the first draft of XACML 3.0).

XML (eXtensible Markup Language) is a general syntax for formulating markup languages, with which hierarchically structured data is represented as human- and machine-readable text. The XML specification defines the rules according to which data is captured in XML. Although XML was primarily developed for storing and processing documents, nearly any data structure can be represented. For this reason, XML is applied, among other things, in the fields of web services and Identity Management (IdM). XML is a meta language that helps to define structural and textual restrictions in an application-specific language. Examples of XML-based languages are RSS, SOAP, XHTML and OpenDocument, but also the IdM schemas DSML, SAML, XACML and SPML.

XML is a successor of the older SGML (Standard Generalized Markup Language, ISO 8879:1986) and was particularly tailored to use on the internet. A valid XML document consists of an optional declaration, elements and attributes. The meaning of elements and attributes is not standardised; the interpretation of the information is left entirely to the particular processing application.

Documents can be well-formed, i.e. they meet the syntactic rules of the XML specification, or can additionally be validated against a schema which further describes their structure. Examples of schema languages are DTD, XML Schema and RELAX NG.
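The difference can be seen with a few lines of Python using the standard library: parsing only succeeds if the document is well-formed (validation against a schema would be a separate step).

```python
# Small illustration with Python's standard library: parsing succeeds only if
# a document is well-formed; schema validation would be an additional step.
import xml.etree.ElementTree as ET

well_formed = "<person><name>Jane Doe</name></person>"
broken = "<person><name>Jane Doe</person>"   # closing tag missing

print(ET.fromstring(well_formed).find("name").text)  # Jane Doe

try:
    ET.fromstring(broken)
except ET.ParseError as error:
    print("not well-formed:", error)
```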

On the basis of the XML specification, several corresponding standards are:

  • XML Namespaces – help to distinguish different vocabularies within XML documents and prevent duplicate element names
  • XML DOM (Document Object Model) – a way to evaluate and manipulate XML documents
  • XSLT (eXtensible Stylesheet Language Transformations) – an XML-based language to transform XML documents into other formats (XML-based and non-XML-based)
  • XPath – serves as a navigation tool for elements and attributes within an XML document
  • XQuery – serves as a programming language to access or evaluate XML documents
  • XLink / XPointer – standards for referencing whole XML documents or parts of them as a hyperlink in an XML document
  • XSL-FO (Extensible Stylesheet Language Formatting Objects) – serves for formatting XML files for screen, paper or other media
  • XForms – standard for generating forms, the successor format to simple HTML forms
  • XML Signature – digital signatures for XML documents
  • XML Encryption – syntax and processing rules for the encryption of XML content

The first specification of the XML standard adopted by the World Wide Web Consortium (W3C) was published in 1998. The current, fifth edition of the 1.0 specification has been available since November 2008. XML is licence-free, platform-independent and individually adjustable.

Further information:
