Thursday, July 16, 2015

Volume attach code flow in Cinder






Intro

Something that comes up fairly often in IRC is “how does attach work?”  From a Cinder perspective the code path is fairly simple, but it seems to throw people for a loop.  So, I figured why not take a look at the reference implementation and walk through the steps of a volume attach from the Cinder side.

Our Reference

The Cinder project includes a reference driver, so we’ll use that here to walk through the code.  The reference driver is built in Cinder using a combination of LVM and iSCSI targets (tgtadm or LIO most commonly).  As with everything in OpenStack you have choices; we’re just going to focus on the default options here: thick-provisioned LVM and TgtAdm for the iSCSI component.  We’re also using the default libvirt/KVM config on our Nova side.

A few high level details

It’s important to understand that most of the work with respect to attaching a volume is done on the Nova side.  Cinder is mostly just responsible for providing a volume’s information to Nova so that it can make an iSCSI attach on the Compute Node.
The communication path between Nova and Cinder goes through the cinderclient, the same cinderclient a command-line user accesses; however, Nova uses a special policy that allows it to access some details about a volume that regular users can’t, as well as a few calls you might not have seen before.
So what we’re going to do is look at an OpenStack deployment that has a volume ready to go (available) and an Instance that’s up and ready.  We’ll focus on the calls from Nova to Cinder and Cinder’s responses.  In a follow-up post we’ll dig into what’s happening on the Nova side.

Process flow

As I mentioned, things on the Cinder side are rather simple.  The attach process is just three calls to Cinder:
  1. reserve_volume
  2. initialize_connection
  3. attach
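From Nova’s side, the whole flow can be sketched as these three client calls in order.  Here’s a minimal, hypothetical sketch — the `volumes` object stands in for a python-cinderclient volume manager, and the method names mirror the list above; this is not the actual Nova code:

```python
# Hedged sketch of the three-call attach sequence; `volumes` is assumed to
# expose reserve / initialize_connection / attach, like cinderclient does.

def attach_volume(volumes, volume_id, connector, instance_uuid, mountpoint):
    volumes.reserve(volume_id)                   # 1. mark the volume 'attaching'
    connection_info = volumes.initialize_connection(volume_id, connector)
    # ... Nova performs the actual iSCSI attach on the compute node here ...
    volumes.attach(volume_id, instance_uuid, mountpoint)  # 3. mark 'in-use'
    return connection_info
```

If any step fails, Nova is responsible for rolling back (e.g. unreserving the volume); we’re only looking at the happy path here.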

reserve_volume(self, context, volume)

context: security/policy info for the request
volume: reference object of the volume being requested for reserve
Probably the simplest call into Cinder.  This method simply checks that the specified volume is in an “available” state and can be attached.  Any other state results in an Error response notifying Nova that the volume is NOT available; the only valid state for this call to succeed is “available”.
If the volume is in fact available, we immediately issue an update to the Cinder database and mark the status of the volume to “attaching” thereby reserving the volume so that it won’t be used by another API call anywhere else.
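The logic can be sketched in a few lines of Python (a simplified stand-in, not the actual Cinder source; the dict plays the role of the Cinder database):

```python
# Hedged sketch of reserve_volume: the only valid starting state is
# 'available'; anything else is an error back to the caller.

class VolumeUnavailable(Exception):
    pass

def reserve_volume(db, volume_id):
    volume = db[volume_id]                 # `db` stands in for the Cinder database
    if volume['status'] != 'available':
        raise VolumeUnavailable(
            "volume %s is %s, not available" % (volume_id, volume['status']))
    volume['status'] = 'attaching'         # reserve it immediately
```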

initialize_connection(self, context, volume, connector)

context: security/policy info for the request
volume: reference object of the volume we’re initializing a connection for
connector: information about the initiator, if needed (i.e. targets that use access groups, etc.)
This is the only Cinder API method that has any significant work to do, and it’s the only one with real interaction with the storage backend or driver.  This method is responsible for building and returning all of the info needed by Nova to actually attach the specified volume.  It returns vital information to the caller (Nova), including things like CHAP credentials, IQN and LUN information.  An example response is shown here:

{'driver_volume_type': 'iscsi',
 'data': {'auth_password': 'YZ2Hceyh7VySh5HY',
          'target_discovered': False,
          'encrypted': False,
          'qos_specs': None,
          'target_iqn': 'iqn.2010-10.org.openstack:volume-8b1ec3fe-8c57-45ca-a1cf-a481bfc8fce2',
          'target_portal': '11.0.0.8:3260',
          'volume_id': '8b1ec3fe-8c57-45ca-a1cf-a481bfc8fce2',
          'target_lun': 1,
          'access_mode': 'rw',
          'auth_username': 'nE9PY8juynmmZ95F7Xb7',
          'auth_method': 'CHAP'}}
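For context, the connector argument Nova passes in is a dict describing the compute node’s initiator.  A minimal illustrative example (the values here are invented, and the exact set of fields varies by hypervisor and driver):

```python
# Illustrative connector dict; 'initiator', 'ip', and 'host' are the fields
# the reference iSCSI flow cares about most (all values are made up).
connector = {
    'initiator': 'iqn.1993-08.org.debian:01:abc123',  # compute node's iSCSI initiator
    'ip': '11.0.0.5',                                 # compute node's IP address
    'host': 'compute-node-1',                         # compute node's hostname
}
```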

In the process of building this data structure, the Cinder manager makes a number of direct calls to the driver.  The manager itself has a single initialize_connection call of its own, but it ties together a number of driver calls from within that method.

        driver.validate_connector
            Simply verifies that the initiator data is included in the passed in 
            connector (there are some drivers that utilize pieces of this connector
            data, but in the case of the reference, it just verifies it's there). 

        driver.create_export
            This builds the target-specific, persistent data associated with a volume.
            This method is responsible for building an actual iSCSI target and
            providing the "location" and "auth" information which will be used to
            form the response data in the parent request.
            We call this info the model_update, and it's used to update vital target
            information associated with the volume in the Cinder database.


        driver.initialize_connection
            Now that we've actually built a target and persisted the important
            bits of information associated with it, we're ready to actually assign
            the target to a volume and form the needed info to pass back out
            to our caller.  This is where we finally put everything together and
            form the example data structure response shown earlier.



            This method is sort of deceptive: it does a whole lot of formatting
            of the data we've put together in the create_export call, but it doesn't
            really offer any new info.  It's completely dependent on the information
            that was gathered in the create_export call and put into the database.  At
            this point, all we're doing is taking the various entries from the database
            and putting them together into the desired format/structure.

            The key method call for updating and obtaining all of this info was
            done by the create_export call.  This formatted data is then passed
            back up to the API and returned as the response back out to Nova.
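Putting the above together, the manager-side orchestration can be sketched roughly like this (a simplified, hypothetical stand-in for the real manager code; the dict plays the role of the Cinder database, and `driver` is assumed to expose the three methods just described):

```python
# Hedged sketch of the manager tying the three driver calls together.

def manager_initialize_connection(driver, db, volume, connector):
    driver.validate_connector(connector)          # verify initiator info is present
    model_update = driver.create_export(volume)   # build the actual iSCSI target
    if model_update:
        db[volume['id']].update(model_update)     # persist target location/auth info
        volume.update(model_update)
    # format the persisted info into the response structure returned to Nova
    return driver.initialize_connection(volume, connector)
```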


At this point Nova can use the returned info to actually make the iSCSI attach on the compute node, and then pass the volume into the requested Instance.  If there are no errors, the volume is now attached to the Instance as a /dev/vdX device and ready for use.  Remember, however, that there’s still one Cinder call left in our list:  attach.


attach(self, context, volume, instance_uuid, host_name, mount_point, mode)


context: security/policy info for the request
volume: reference object of the volume being attached
instance_uuid: UUID of the Nova instance we've attached to
host_name: N/A for the reference driver
mount_point: device mount point on the instance (/dev/vdb)
mode: The attach mode of the Volume (rw, ro, etc.)
This is another method that falls into a category I call "update methods".  Its purpose is to notify Cinder to update the status of the volume to "in-use" (attached) and to populate the database with the provided information about where it's attached.
This also provides a mechanism to send notifications and updates back to the driver.
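As a sketch, the whole call amounts to a status flip plus a few bookkeeping fields (again a simplified stand-in, not the actual Cinder source; the dict plays the role of the Cinder database):

```python
# Hedged sketch of attach as a pure "update method": no driver work, just
# marking the volume in-use and recording where it went.

def attach(db, volume_id, instance_uuid, mount_point, mode='rw'):
    volume = db[volume_id]                  # `db` stands in for the Cinder database
    volume['status'] = 'in-use'             # the volume is now attached
    volume['instance_uuid'] = instance_uuid # which Instance it's attached to
    volume['mountpoint'] = mount_point      # where it shows up in the Instance
    volume['attach_mode'] = mode            # rw / ro
```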