API Keys service | On‑Premise | Urbi Documentation

Installing API Keys service

Important note:

All passwords and keys in this section are given for illustration purposes.

During a real installation, use stronger, unique passwords.

  1. Consider getting familiar with:

  2. Make sure the necessary preparation steps are completed:

    1. Preparation for installation
    2. Fetching installation artifacts
  3. Collect the necessary information that was set or retrieved on previous steps:

    | Object | Example value | How to get value |
    | --- | --- | --- |
    | Docker Registry mirror endpoint | docker.storage.example.local:5000 | See Fetching installation artifacts |
    | Kubernetes secret for accessing Docker Registry | onpremise-registry-creds | See Fetching installation artifacts |
    | Installation artifacts S3 storage domain name | artifacts.example.com | See Fetching installation artifacts |
    | Bucket name for installation artifacts | onpremise-artifacts | See Fetching installation artifacts |
    | Installation artifacts access key | AKIAIOSFODNN7EXAMPLE | See Fetching installation artifacts |
    | Installation artifacts secret key | wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY | See Fetching installation artifacts |
    | Path to the manifest file | manifests/1640661259.json | See Fetching installation artifacts |
  4. Make sure that the resource requirements specified in the Helm chart are met. For more information on how to do this, refer to the System requirements document.

    * These storage requirements may vary depending on the configured statistics storage time period. The greater this period is, the more storage space is required.

    Note

    Contents of the Helm chart described in this chapter are relevant for the latest On-Premise version (see Release notes). To find parameters for earlier versions, open values.yaml on GitHub and enter the required version number (for example, 1.18.0) in the tag switcher on the left.

  5. Choose the domain names for the services.

    Example: keys.example.com

Place a PostgreSQL cluster with the domain name keys-postgresql.storage.example.local in the private network. This instruction assumes that the cluster works on the standard port 5432.

Configure the PostgreSQL cluster for usage as a storage:

  1. Connect to the cluster as a superuser (usually postgres).

  2. Create two database users that will be used for the service. Set passwords for the users.

    create user keys_superuser_rw password 'KEYS_Db_Owner_Password_1234';
    create user keys_user_ro password 'KEYS_Db_RO_User_Password_5678';
    
  3. Create a database owned by one of the users.

    create database onpremise_keys owner keys_superuser_rw;
    
  4. Grant limited permissions on the database to the other user.

    \c onpremise_keys
    
    ALTER DEFAULT PRIVILEGES FOR ROLE keys_superuser_rw IN SCHEMA public GRANT SELECT ON TABLES TO keys_user_ro;
    ALTER DEFAULT PRIVILEGES FOR ROLE keys_superuser_rw IN SCHEMA public GRANT SELECT ON SEQUENCES TO keys_user_ro;
    
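Note that ALTER DEFAULT PRIVILEGES only affects tables and sequences that keys_superuser_rw creates after this point. As a hedged sketch (adjust to your schema), you may also need to grant the read-only user access to the database, the schema, and any objects that already exist:

```sql
\c onpremise_keys

-- Allow the read-only user to connect and to use the public schema
GRANT CONNECT ON DATABASE onpremise_keys TO keys_user_ro;
GRANT USAGE ON SCHEMA public TO keys_user_ro;

-- Cover any tables and sequences that already exist in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA public TO keys_user_ro;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO keys_user_ro;
```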

For the servers to be able to authenticate API Keys service administrators, it is recommended to use an LDAP server (e.g., Microsoft Active Directory). You can skip this step if you cannot deploy an LDAP server and plan to use authentication based on a plaintext password in the configuration file.

Place an LDAP server with the domain name keys-ldap.storage.example.local in the private network. This instruction assumes that the server works on the standard port 3268.

  1. Collect the necessary LDAP settings.

    | Setting | Example value |
    | --- | --- |
    | LDAP service username | keys_ldap_user |
    | LDAP service password | KEYS_LDAP_PaSSw0rd_8901 |
    | Base relative distinguished name for performing search in the LDAP catalog | dc=2gis |
    | LDAP filter for identifying entries in the search requests | (&(objectClass=user)(sAMAccountName=%s)) |
  2. Add an LDAP user named admin, which will be granted the admin role in the API Keys service.
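Before proceeding, you can verify the bind credentials and search filter collected above with the OpenLDAP ldapsearch utility. This is a hedged sketch using the example values from the table; substitute your real host, credentials, and base DN:

```shell
# Simple bind as the LDAP service user, then search for the admin
# account using the same base DN and filter the Helm chart will use.
ldapsearch -x \
  -H ldap://keys-ldap.storage.example.local:3268 \
  -D 'keys_ldap_user' -w 'KEYS_LDAP_PaSSw0rd_8901' \
  -b 'dc=2gis' \
  '(&(objectClass=user)(sAMAccountName=admin))' dn
```

If the command returns the admin entry's DN, the API Keys service should be able to authenticate that user with the same settings.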

  1. Create a Helm configuration file. For more details on the available settings, see values.yaml on GitHub (see the note above).

    The example is prefilled with the necessary data collected on previous steps.

    values-keys.yaml
    dgctlDockerRegistry: docker.storage.example.local:5000
    
    dgctlStorage:
        host: artifacts.storage.example.local:443
        bucket: onpremise-artifacts
        accessKey: AKIAIOSFODNN7EXAMPLE
        secretKey: wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY
        manifest: manifests/1640661259.json
        secure: false
        region: ''
        verifySsl: true
    
    postgres:
        ro:
            host: keys-postgresql.storage.example.local
            port: 5432
            name: onpremise_keys
            username: keys_user_ro
            password: KEYS_Db_RO_User_Password_5678
        rw:
            host: keys-postgresql.storage.example.local
            port: 5432
            name: onpremise_keys
            username: keys_superuser_rw
            password: KEYS_Db_Owner_Password_1234
    
    ldap:
        host: keys-ldap.storage.example.local
        port: 3268
    
        useStartTLS: false
        useLDAPS: false
        skipServerCertificateVerify: false
        serverName: keys-ldap.storage.example.local
        clientCertificatePath: /home/user/certificates/cert.crt
        clientKeyPath: /home/user/certificates/cert.key
        rootCertificateAuthoritiesPath: /home/user/certificates/root.cer
    
        bind:
            dn: keys_ldap_user
            password: KEYS_LDAP_PaSSw0rd_8901
    
        search:
            baseDN: dc=2gis
            filter: (&(objectClass=user)(sAMAccountName=%s))
    
    tasker:
        resources:
            requests:
                cpu: 10m
                memory: 32Mi
            limits:
                cpu: 100m
                memory: 64Mi
    
        delay: 30s
    
    admin:
        host: https://keys.example.com
    
        ingress:
            enabled: true
            className: nginx
            hosts:
                - host: keys.example.com
                  paths:
                      - path: /
                        pathType: Prefix
            tls: []
            #- hosts:
            #  - keys.example.com
            #  secretName: secret.tls
    
    api:
        adminUsers: 'admin:8k7RVCP8m3AABDzD'
    
    customCAs:
        bundle: ''
        # bundle: |
        #   -----BEGIN CERTIFICATE-----
        #   ...
        #   -----END CERTIFICATE-----
        certsPath: ''
    

    Where:

    • dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.

    • dgctlStorage: Installation Artifacts Storage settings.

      • Fill in the common settings to access the storage: endpoint, bucket, and access credentials.
      • manifest: fill in the path to the manifest file in the manifests/1640661259.json format. This file contains the description of pieces of data that the service requires to operate. See Installation artifacts lifecycle.
      • secure: whether to use HTTPS for interacting with the S3 compatible storage. Default value: false.
      • region: S3 storage region.
      • verifySsl: whether to validate SSL certificates when connecting to dgctlStorage.host via HTTPS. Default value: true.
    • postgres: access settings for the PostgreSQL server.

      The API Keys service serves the data in two modes: read-only (ro) and read-write (rw). Both modes use the same database, but the users are configured with different sets of permissions (see the PostgreSQL configuration steps above). Configure:

      • Settings that are common for both modes:

        • host: hostname or IP address of the server.
        • port: listening port of the server.
        • name: database name.
      • Credentials of the read-only user (the ro section).

      • Credentials of the read-write user (the rw section).

      The Helm chart uses Kubernetes Secrets to store the password settings in the ro and rw sections.

    • ldap: access settings for the LDAP server.

      • host: hostname or IP address of the server.

      • port: listening port of the server.

      • A group of settings that configure secure access to the LDAP server:

        • useStartTLS: use StartTLS.
        • useLDAPS: use Secure LDAP.
        • skipServerCertificateVerify: do not verify the server certificate.
        • serverName: string with the server name. Used when verifying the server certificate.
        • clientCertificatePath: path to client certificate.
        • clientKeyPath: path to client key.
        • rootCertificateAuthoritiesPath: path to the root certificate authorities file.
      • bind: credentials for accessing the LDAP server.

        • dn: distinguished name of the user to bind as.
        • password: user password.
      • search: LDAP search settings.

    • tasker: settings of the Tasker service, which performs administrative actions on API keys.

      • resources: computational resources settings for the service. To find out recommended resource values, see Computational resources.
      • delay: time interval that defines how often the service checks for tasks related to delayed actions (for example, blocking an API key).
    • admin: settings of the API keys admin web service (Web UI).

      • host: URL of the API keys frontend. This URL should be accessible from outside your Kubernetes cluster so that users in the private network can browse it.

      • ingress: configuration of the Ingress resource. Adapt it to your Ingress installation. The URL specified in the ingress.hosts.host parameter should be accessible from outside your Kubernetes cluster so that users in the private network can browse it.

    • api.adminUsers: a list of credentials of administrator users in the username1:password1,username2:password2,... format.

      The Helm chart uses Kubernetes Secrets to store the setting.

      Note:

      If you have an LDAP server, it is recommended to use it for authentication and skip the api.adminUsers setting.

    • customCAs: custom certificates settings.

      • bundle: text representation of a certificate in the X.509 PEM public-key format.
      • certsPath: bundle mount directory in the container.
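Before deploying, you can optionally render the chart locally to catch YAML and templating mistakes early. This is a sketch assuming the 2gis-on-premise Helm repository is already added, as in the deployment command below:

```shell
# Render the chart with your values file without installing anything;
# a non-zero exit code indicates a templating or validation error.
helm template keys 2gis-on-premise/keys --version=1.31.0 \
  --values ./values-keys.yaml > /dev/null && echo "values-keys.yaml renders OK"
```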
  2. Deploy the service with Helm using the created values-keys.yaml configuration file:

    helm upgrade --install --version=1.31.0 --atomic --wait-for-jobs --values ./values-keys.yaml keys 2gis-on-premise/keys
    
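After the release is installed, you can check that the service pods reached the Ready state. The label selector below is an assumption based on standard Helm labels; adjust it to your release:

```shell
# List the pods that belong to the "keys" release and check their status
kubectl get pods -l app.kubernetes.io/instance=keys
```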
  3. Add the administrator users to the deployed service via the keysctl utility. These users will be assigned the API Keys service administrator role.

    Important note:

    When using LDAP, it is sufficient to add a single user. When using a credentials list (the api.adminUsers setting), add all the users from the list.

    To add a user, execute the following command from inside any keys-api pod:

    keysctl users add admin 'Keys Service Admin'
    
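One way to run the command inside a pod is kubectl exec. This is a hedged sketch; the app=keys-api label selector is an assumption, so adjust it to the labels your deployment actually uses:

```shell
# Pick any running keys-api pod and add the administrator user in it
POD=$(kubectl get pods -l app=keys-api -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- keysctl users add admin 'Keys Service Admin'
```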

By design, each On-Premise service that integrates with the API Keys service shares information about end users' API key usage with the API Keys backend. To communicate with the backend, each such service needs a service token, which is configured during its deployment.

To get the list of service keys to be used with a given On-Premise API Keys deployment, execute the following command from inside any keys-api pod:

keysctl services

To test the operability of the API Keys service, do the following:

  1. Open the admin web interface in a browser (use the value of the admin.host setting from the values-keys.yaml file):

    https://keys.example.com/
    
  2. Log in using the administrator user credentials (the user that was granted the administrator role via the keysctl utility). You should see the API Keys service web interface for managing API keys.
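Besides the browser check, a quick reachability test can be run from the command line. This assumes the Ingress serves HTTPS on the domain from admin.host; with a self-signed certificate, -k skips verification:

```shell
# Expect an HTTP success or redirect code (e.g. 200 or 302) from the admin UI
curl -sk -o /dev/null -w '%{http_code}\n' https://keys.example.com/
```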
