Monday, September 26, 2016

Convenient git rebase while resolving merge conflicts


If you work in a distributed development team, you have most probably run into a merge conflict situation.
For example: your local changes conflict with recent repository updates.

Well, you can merge your local branch, resolve conflicts, and then push your changes upstream.


git checkout master
git merge myfix
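# resolve conflicts here and commit the merge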
git push

In the Gerrit workflow, the merge is effectively replaced with a rebase operation:

git checkout myfix
git rebase origin/master
# resolve conflicts here
git rebase --continue

At this point you may be surprised by the need to resolve conflicts once more.

Without going deep into the reason (in short, rebase replays every commit of your branch, so the same conflict can surface more than once), this is how I avoid the redundancy:

git checkout myfix
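# transplant only the tip commit (HEAD^..HEAD) onto origin/master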
git rebase --onto origin/master HEAD^ HEAD
# resolve conflicts here
git rebase --continue

And that's it.

git review

Tuesday, August 16, 2016

Step-by-step client-server troubleshooting


Abstract

As a software developer, I want to describe one scenario that repeats itself with clockwork regularity:
somebody gets something unexpected from a client talking to a server and doesn't know what to do with it.

Well then, take my hand and follow the path...

What's happened?

Usually in a client-server architecture the only part the victim has at hand is the client: typically a library or CLI that just proxies requests to the service via some evil magic. So if the client lib/CLI tells you something, you shouldn't take it at face value. Remember: it's magic!

All you can say when looking at an error message, instead of the expected nice lines of data or a status code encouraging you with 'everything seems OK', is that 'the client received something that doesn't match a positive result'.

Here comes the first precaution: make sure you're using the right version of the client! I'm serious: even if you've just installed a shiny new environment from a single distribution, the latter can be broken, and you can't bet your life it's perfect. So just check.

For a lot of people it's really surprising that the error message returned by the client may actually mean something, so read it!

At this point you should be able to make a decision: either the server is malfunctioning, or you passed something wrong to the client from the beginning. The chances of the latter are very high if the problem appeared suddenly.

Enemy unknown

Well, here is the point where you usually believe you've done everything right. It's always good to be sure you're not the one to blame, but that doesn't solve the problem.

The next level of this dungeon is the logging system, so make sure both client and server (especially the server) log detailed messages. You may have to check some configuration files, get some documentation, find it obsolete, figure something out, and finally have detailed debug logging enabled.

ALARM: make sure you are about to analyze the right logs! It's quite common for the logs to be "somewhere else".

  1. Find traces of your problem in the logs of your client (if you are using a CLI, just look at the screen)
  2. Mark the time of the incident!
  3. Find that time in the server logs and see what happened there (see the sketch below)
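
To make the routine concrete, here is a minimal Python sketch of those three steps. The log paths, the 'ERROR' marker, and the timestamp format are assumptions; adjust them to your setup.

import re

CLIENT_LOG = '/var/log/myapp/client.log'  # hypothetical paths
SERVER_LOG = '/var/log/myapp/server.log'

def lines_with(path, needle):
    """Return all lines of the file containing the given substring."""
    with open(path) as f:
        return [line for line in f if needle in line]

# 1. Find traces of the problem in the client log.
errors = lines_with(CLIENT_LOG, 'ERROR')

# 2. Mark the time of the incident (assumes an ISO-8601-like timestamp prefix).
match = re.match(r'\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}', errors[-1]) if errors else None

# 3. Find that time in the server logs and see what happened there.
if match:
    for line in lines_with(SERVER_LOG, match.group(0)):
        print(line.rstrip())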

Where are my logs?!

Haha! Have you forgotten about any proxies in between? ;)

There is nothing in the logs!

Really? Are you sure you didn't skip anything? Well, call a dev then...

Thursday, May 26, 2016

Austin Summit summary for keystone

Austin OpenStack 2016 keystone design summit outcomes

Several weeks have passed since the Summit, the etherpads have been read through several times, and the deadline to present the results was about a week ago already; now I'm finally ready to publish a digest of notable things from the event.

The main discussion topics were:

  • New features
  • Existing functionality improvements
  • Stabilization
  • Integration bits and pieces
  • Shadow Users, LDAP and Federation

Well, let's begin. I'll try to be brief.

New features

Every feature to be implemented is assigned a responsible person and two champions from the core team to make sure it is not forgotten.

Link to the source: https://etherpad.openstack.org/p/newton-keystone-new-features

Reseller

spec: http://specs.openstack.org/openstack/keystone-specs/specs/kilo/reseller.html

OpenStack needs to grow support for hierarchical ownership of objects. This enables managing subsets of users and projects in a way that is much more comfortable for private clouds, besides giving public cloud providers the option of reselling a piece of their cloud.

Background: the first phase of Reseller landed in Mitaka. This means that domains are actually stored as a "top-level project of a hierarchy", with a special flag ("is_domain").

The overall goal of the second phase is to let a cloud provider allow a reseller to resell part of their cloud, so that the reseller can onboard (and look after) their own customers (i.e. create domains for them) without having the cloud provider involved in such onboarding.

The proposal is therefore to extend our model of project hierarchies to allow more than just the top project to act as a domain: rather, the top levels of the hierarchy may act as domains. The parent of a domain is always another domain... no hierarchies with a domain below a project. This builds on the url-safe support that landed in Mitaka, and on https://review.openstack.org/#/c/310048/ (proposed for Newton), which allows auth using a url-style qualified path to a project.

Henry Nash is planning to propose a spec specifically for this phase 2 of Reseller, that is to be done in Newton.

Unified delegation

spec: https://specs.openstack.org/openstack/keystone-specs/specs/backlog/unified-delegation.html 

Role assignments, OAuth tokens, and trusts all serve one single purpose: to delegate roles on a resource to an actor. A resource may be either a project or a domain; an actor is a user or a group. The current architecture does not maintain a chain of responsibility for tracking which user originally created the role assignment, nor does it have any means to restrict its usage. Trusts are more a workaround than the sole solution for their own use case. A scoped token represents a short-term delegation for performing the set of operations necessary to complete a single workflow. Access control is performed by matching token contents against policy criteria and by validating the token itself via a keystone call.

It is necessary to unify all the delegation APIs (assignment, trust, and oauth1) in order to:

  • Allow a user with restricted access (a non-admin user) to perform delegations
  • Strengthen delegation responsibility tracking
  • Provide a transparent delegation model using any of the existing delegation APIs
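
As an illustration of today's fragmented state, here is a minimal sketch of one of the three delegation APIs, a trust, created via python-keystoneclient. The endpoint, credentials, role name, and IDs are placeholders.

from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client

# Authenticate as the trustor (placeholder endpoint and credentials).
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='trustor', password='secret',
                   project_name='demo',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)

# Delegate the 'Member' role on the current project to another user.
trust = keystone.trusts.create(trustor_user=sess.get_user_id(),
                               trustee_user='<trustee-user-id>',
                               project=sess.get_project_id(),
                               role_names=['Member'],
                               impersonation=False)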

We came to an agreement on the implementation issues, complicated by the restriction that the current APIs should remain unchanged.

 

I was assigned to do the work; Adam Young and Samuel will be the champions.

Multi-factor auth

spec: https://review.openstack.org/#/c/272287/

When authenticating, there are several ways a user can prove their identity, known as authentication factors. These include the knowledge factor, the possession factor, and the inherence factor. A password would be an example of a knowledge factor, since it is something the user knows. An RSA token would be an example of a possession factor, as the user must have it in their possession. A user's fingerprint would be an example of an inherence factor, since it is a biometric trait of the user. Authentication factors can be summarized as follows:

  • knowledge factor: something the user knows
  • possession factor: something the user has
  • inherence factor: something the user is

Combining two or more of these authentication factors is known as multi-factor authentication. Keystone currently supports authentication based on the knowledge and possession factors. There is no support for combining authentication factors in such a way that hardens the criteria for obtaining a token.
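
For reference, this is roughly what a two-factor v3 token request looks like on the wire (keystone's TOTP method landed in Mitaka); the endpoint and IDs are placeholders, and nothing today lets a deployer require both methods at once:

import json
import requests

body = {
    "auth": {
        "identity": {
            # Two factors in one request: knowledge (password) + possession (TOTP).
            "methods": ["password", "totp"],
            "password": {"user": {"id": "<user-id>", "password": "secret"}},
            "totp": {"user": {"id": "<user-id>", "passcode": "123456"}},
        },
        "scope": {"project": {"id": "<project-id>"}},
    }
}
resp = requests.post('http://keystone:5000/v3/auth/tokens',  # placeholder endpoint
                     data=json.dumps(body),
                     headers={'Content-Type': 'application/json'})
print(resp.status_code, resp.headers.get('X-Subject-Token'))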

This has proven to be a tricky one. Things required to implement MFA in keystone:

After some discussion, PCI-DSS was deemed to be different from MFA, because PCI buys some useful stuff for service users.

So we will continue implementing PCI-DSS, because it's technically low-hanging fruit and David Stanek has some patches proposed a long time ago.

We agreed not to do MFA in Newton and rather to perform the following:

  • Document how to use MFA by using federation
  • Get the tokenless auth devstack patch working and merged
  • Finish encrypting the credential backend so that the TOTP implementation is finished

Work will be done by Werner, championed by Jamie Lennox and Guang Yee.

ldap3 driver

LDAP support depends on the python-ldap library, which isn't compatible with Python 3. So a new driver will be built using ldap3, allowing us to deprecate the old ldap driver.
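
For a taste of the library, here is a minimal ldap3 sketch (it runs fine on Python 3); the host, bind DN, and password are placeholders:

from ldap3 import ALL, Connection, Server

# Placeholder host and credentials.
server = Server('ldap://ldap.example.org', get_info=ALL)
conn = Connection(server,
                  user='cn=admin,dc=example,dc=org',
                  password='secret',
                  auto_bind=True)

# Look up people and print their DNs.
conn.search('dc=example,dc=org', '(objectClass=person)',
            attributes=['cn', 'mail'])
for entry in conn.entries:
    print(entry.entry_dn)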

Work will be done by Roxana and Kristi, championed by Adam Young and Morgan Fainberg.

PCI-DSS support

spec: http://specs.openstack.org/openstack/keystone-specs/specs/keystone/newton/pci-dss.html

Payment Card Industry - Data Security Standard (PCI-DSS) v3.1 provides an industry standard for data security requirements and procedures. Although keystone deals with sensitive data (primarily passwords), it has not made any attempt to provide PCI-compliant tools to deployers for fear of re-implementing more mature identity management solutions. At the same time, deployers are taking on the additional burden of either deploying those fully featured identity management solutions just to support keystone, or are re-implementing these behaviors on top of keystone without community support.
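
To give an idea of the scope, the spec proposes a set of keystone.conf options along these lines (a sketch: option names are per the linked spec and may change before they merge):

[security_compliance]
# Lock an account after repeated failed authentication attempts.
lockout_failure_attempts = 6
lockout_duration = 1800
# Force password rotation and forbid reusing recent passwords.
password_expires_days = 90
unique_last_password_count = 5
minimum_password_age = 1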

Work will be done by Dolph Mathews, championed by Steve Martinelli.

Existing functionality improvements

Shadow Users

Spec: http://specs.openstack.org/openstack/keystone-specs/specs/keystone/newton/shadow-users-newton.html

blueprint: https://blueprints.launchpad.net/keystone/+spec/shadow-users

Shadow users unify all keystone identities (local, federated, LDAP, custom backend) by modeling identities with a single standard unique identifier. These identities are mapped (or shadowed) and persisted in the backend. This feature facilitates account linking and allows one user to be associated with multiple credentials. For example, a user could be associated with a local identity as well as federated identities. In addition, Shadow Users provides a consistent user experience and could lead to performance improvements through referential integrity.

There is uncertainty, however, about how to define the shadow user ID.

Options for the source of the user ID for LDAP & custom drivers:

  1. Operators define a unique ID
    1. Pros: Minimal impact on existing deployments
    2. Cons: Non-standard identifier
  2. Keystone injects a unique ID
    1. Pros: Standard identifier, referential integrity, optimized queries
    2. Cons: Data migration for existing deployments

Work will be done by Ron de Rose, championed by Steve Martinelli and Dolph Mathews.

Federation Enhancements

Guang Yee will be working on:

  1. Support mapping directly to a scoped token
  2. Deprecate the "REMOTE_ID_ATTRIBUTE" configuration in keystone.conf and move it to the API: it should be part of the Service Provider and protocol, and managed programmatically.
  3. For X.509 tokenless authz, deprecate the "TRUSTED_ISSUER" configuration in keystone.conf: it should be part of the mapping.
  4. Wishlist: "API keys" (non-interactive use case; scenario: SAML2 federation where "keystone.auth.plugins.saml2" doesn't work, as it relies on a specific IdP feature, ECP, requiring customer changes)

Do we have any new API features in keystone that we don't have support for in the clients?

  • domain-specific roles aren't supported in openstack CLI
  • implied roles aren't supported in openstack CLI
  • One-Time Password
    • Best example of "MFA" that "automation" would use is something like "yubikey" (Thanks Guang Yee for the example!)
    • There is a patch up to ksc for a TOTP plugin https://review.openstack.org/#/c/281086/
    • There is currently no client support for multiple authentication factors (multiple plugins) handled at once
    • The client UX is a challenge, because you're passing multiple factors
    • Should OTP *always* be an interactive prompt? Most probably yes, as multi-factor authentication is not intended for automation cases

Stabilization

Deprecation plans

  1. Token revocation list: an old approach that operates on the entire list of revoked tokens.
  2. admin_token middleware: an admin token stored in the config file is definitely not the most secure way to manage administrator access.
  3. keystonemiddleware s3 should be split into its own repo and owned by swift-core.
  4. Key-value storages: SQL is doing just fine.

Things left to support Python 3

  1. Change memcached client: there are more stable implementations than just python-memcached.
  2. Replace python-ldap with ldap3: the latter supports py3 out-of-the-box.
  3. Replace uuid4().hex in tests with something that generates unicode strings: we need to pay attention to i18n (see the sketch below).
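
For the last item, a hypothetical test helper could look like this; uuid4().hex yields ASCII-only strings, so they never exercise encoding paths:

import uuid

def unicode_name():
    # uuid4().hex is ASCII-only; prepending a non-ASCII character
    # forces the code under test through its i18n/encoding paths.
    return u'\u043f-%s' % uuid.uuid4().hex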

Fernet is not the default yet!

To become the default, it requires:

  1. Moving the revocation tree entirely to SQL, so that it is not reconstructed every time.
  2. Supporting trust authN for API v2.0 (there were some problems).
  3. It is believed that a performance improvement is possible.

Database upgrades

The main goal now is figuring out how to keep a cluster running during an upgrade: "minimal downtime" upgrades. There was no keystone-specific discussion, but here are some related resources from other teams:

  • "Lockless Upgrades" presentation with some results of testing DB performance when changing the schema 

Testing

The testing discussion ended with the following summary (with volunteers to do the work!):

  • add devstack plugins for federation (breton)
  • add devstack plugins for ad/ldap (already exist) - just needs more sample data
  • add tests for the different routes (rodrigods)
  • add tests for federation (rodrigods, breton)
  • add tests for ldap
  • add tests for keystoneclient (samueldmq / nisha)
  • add devstack plugins for k2k (gsilvis, knikolla, jamielennox)
  • add tests for k2k (gsilvis, knikolla)

Regretfully, we still have unit tests that fail under Python 3.4:

  • keystone.tests.unit.common.test_ldap
  • keystone.tests.unit.test_backend_ldap
  • keystone.tests.unit.test_backend_ldap_pool
  • keystone.tests.unit.test_v2
  • keystone.tests.unit.test_v3_auth
  • keystone.tests.unit.test_v3_credential
  • keystone.tests.unit.test_v3_oauth1
  • keystone.tests.unit.token.test_fernet_provider

The most interesting part, as expected, was setting up a gate for running federation tests using Tempest. It's a known problem that keystone-to-keystone federation requires more than one keystone, while test environments usually provide no more than one!

So the topic was left for an ML discussion (http://lists.openstack.org/pipermail/openstack-dev/2016-March/091058.html), and at the Summit we agreed on the following steps:

  1. 2 keystone instances or 1? Start testing with 1 instance, then use these tests to help check new CI environments with 2 instances/devstacks/containers.
  2. Federation set up (there is still a question: where do we create such devstack VMs?)
    1. Set up shibboleth with devstack
    2. Set up keystone-related things from setUp in tests, not from devstack
    3. Set up IdP
      1. Use keystone as an IdP
      2. Use third-party IdP (pysaml2-based, probably)
  3. Federation tests (trivial using tempest)

Integration bits and pieces

V3 only devstack. When?

The answer is "not yet". Currently we are at the step: "make a TC proposal that at some point a failing v3-only job is not keystone's problem".

Non-python keystoneauth

This question was raised without notable effect, beyond a reference to the "ton of SDKs" already out there: https://wiki.openstack.org/wiki/SDKs#OpenStack_Toolkits

Migrating libraries from keystoneclient to keystoneauth

is under way.

Using the auth_token auth plugin

Progress: nobody's using it. It is still there in auth_token, blocked on figuring out whether oslo.context is going to handle this.

django-openstack-auth

Work is in progress on the keystone-to-keystone federation plugin.

keystoneauth

It was decided to add an entry point for saml or kerberos auth.

keystone-manage bootstrap

We are deprecating ADMIN_TOKEN in Newton.

Shadow Users, LDAP and Federation

First, let me describe the wishes:

LDAP as the base identity driver should be deprecated; instead we should only support domain-specific backends. We need to migrate people who use LDAP Identity only.

Federation is the main way people should integrate with Keystone.

Even LDAP can and should be done via federation, using mod_lookup_identity. We need to split what a cloud admin has to do from what a federation IdP admin can do when setting up the mapping table.

Conditions:

  • We still need to be able to map an existing user that has not yet visited keystone
  • Mapping should be split into two parts:
    • IdP to domain (set by the cloud admin)
    • Mapping JSON (set by the IdP admin)

Scenario 1:

From an IdP -> Groups and Users

Should we create shadow groups...? That is the question.

- Create a role assignment between a shadow user and a project: currently the assignment is set upon first authentication.

- Create a role assignment between a shadow group and a project: that is what shadow groups are needed for.

Scenario 2:

Mapping from an IdP -> Users

The reality:

Straight LDAP will never go away in favor of federation: too many deployments with LDAP already exist, and it's easier to use.