Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.


Nova
Tuesday, November 5
 

11:15am

Conductor Tasks Next Steps
Look at tasks as first-class citizens in Nova. We can trigger all API actions from the conductor, but leave DB read-outs in the compute API. Work towards detecting when a worker has died, and reporting or recovering from the error. We can report all progress and errors through the new Task API that has been proposed.

This work could lead us to a position where we can consider using TaskFlow.

For more discussion, see:
https://etherpad.openstack.org/IcehouseConductorTasksNextSteps

It is related to the Task API session:
http://summit.openstack.org/cfp/details/176
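The worker-death detection described above could work from heartbeats on a task record. A minimal sketch, with all names hypothetical (this is not the proposed Task API):

```python
import time
import uuid

# Hypothetical conductor-side task record: the conductor creates a task
# for each API action, workers heartbeat while running, and a stale
# heartbeat lets the conductor detect a dead worker and report the error.

class Task:
    def __init__(self, action):
        self.id = str(uuid.uuid4())
        self.action = action
        self.state = 'pending'
        self.progress = 0
        self.last_heartbeat = time.time()

    def heartbeat(self, progress):
        self.progress = progress
        self.last_heartbeat = time.time()
        self.state = 'running'

    def is_stale(self, timeout=60.0, now=None):
        now = time.time() if now is None else now
        return self.state == 'running' and (now - self.last_heartbeat) > timeout

task = Task('live-migrate')
task.heartbeat(progress=50)
assert not task.is_stale()
# A conductor sweeping tasks could mark stale ones as errored:
if task.is_stale(timeout=60.0, now=task.last_heartbeat + 120):
    task.state = 'error'
```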

(Session proposed by John Garbutt)


Tuesday November 5, 2013 11:15am - 11:55am
AWE Level 2, Room 204-205

12:05pm

Instance tasks in API
This session will include the following subject(s):

Instance tasks in API:

As a sort of followup to the instance actions work, and to help facilitate https://blueprints.launchpad.net/nova/+spec/soft-errors I would like to discuss the idea of tasks as their own API resource. This should simplify reporting task failures and allow better tracking of progress. An eventual goal would be the ability to cancel a task.
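To make the idea concrete, a task exposed as its own API resource might serialize along these lines; field names are purely illustrative, not the proposed schema:

```python
# Hypothetical serializer for a task resource, the kind of thing a
# GET /servers/<id>/tasks/<task_id> might return. All names are
# illustrative assumptions, not Nova's actual schema.

def make_task_view(task_id, instance_uuid, action, state,
                   progress=0, error=None):
    return {
        'id': task_id,
        'instance_uuid': instance_uuid,
        'action': action,       # e.g. 'reboot', 'migrate'
        'state': state,         # 'pending' | 'running' | 'error' | 'complete'
        'progress': progress,   # 0-100, for progress tracking
        'error': error,         # populated on a soft failure
    }

view = make_task_view('t-1', 'i-abc', 'migrate', 'running', progress=40)
assert view['state'] == 'running'
```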

(Session proposed by Andrew Laski)

feature of Cancel:

In OpenStack, we should implement a cancel function for time-consuming APIs.
For example, live migration in Nova, upload to Glance in Cinder, and so on.
Implementing a cancel API (or similar function) would be useful and considerate towards users.
Here, "time-consuming" means an API whose processing time varies depending on the size of the resource.
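One common pattern for this, sketched here under assumed names: the long-running operation checks a cancellation flag between units of work, so a cancel call can stop it at a safe point.

```python
import threading

# Sketch of a cancellable long-running operation: work proceeds in
# chunks and a cancellation flag is checked between chunks, which is
# what a cancel API call would set. All names are hypothetical.

def upload_image(chunks, cancelled, uploaded):
    for chunk in chunks:
        if cancelled.is_set():
            return 'cancelled'
        uploaded.append(chunk)   # stand-in for one unit of real work
    return 'complete'

cancelled = threading.Event()
done = []
assert upload_image(['a', 'b'], cancelled, done) == 'complete'

cancelled.set()                  # what a cancel request might trigger
assert upload_image(['c'], cancelled, []) == 'cancelled'
```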

(Session proposed by haruka tanizawa)


Tuesday November 5, 2013 12:05pm - 12:45pm
AWE Level 2, Room 204-205

2:00pm

Nova Un-conference (Tuesday)
This session will give an opportunity to cover a variety of topics for 10 minutes each. If you would like to participate, please sign up for a slot on the following etherpad.

https://etherpad.openstack.org/p/NovaIcehouseSummitUnconference

(Session proposed by Russell Bryant)


Tuesday November 5, 2013 2:00pm - 2:40pm
AWE Level 2, Room 204-205

2:50pm

Nova live-snapshot feature
Overview:
Work was done during the Havana development cycle to implement "live-snapshot" for virtual machines in Nova. A live-snapshot is basically a more complete snapshot of an instance. In addition to its disk(s), the memory and processor state are also snapshotted. This allows a quick launch of an instance from the snapshot instead of the regular cold boot from a snapshot image.

As it turns out, there were some concerns with the hypervisor support for this feature and it did not make it in for the Havana release. This design session is to revive the feature to determine which hypervisors will support it as well as discuss the concerns over the current Libvirt/KVM implementation.

There are also a few other questions that should be hashed out for this feature:
[a] How are we going to handle attached volumes? How do we handle booting with a different set, or different ordering, of volumes?

[b] How are networks going to be affected? Can a live-snapshot be booted with a different set of networks?

[c] Changes in flavours when a live-snapshot is booted -- e.g. we have a live-snapshot of an instance with 512MB of memory, but the user tries to boot a 1GB instance.

[d] Can CPUID-heterogeneous clouds be supported? How about clouds composed of different hypervisor versions? Are there any other host-level issues with live-snapshots?

[e] There also appears to be a need for an in-guest agent to help reconfigure the live-snapshot once it is booted. How does Nova communicate with the agent? When does the agent know to reconfigure the instance, and how does it get the information?

(Session proposed by David Scannell)


Tuesday November 5, 2013 2:50pm - 3:30pm
AWE Level 2, Room 204-205

3:40pm

Improve VM boot times, by pre-starting VMs
Windows VMs take a while to boot due to sysprep, and sometimes need to complete installs of things like SQL Server.

We can pick a selection of popular image/flavor combinations to make more quickly available.

We can start up the VM, do the steps that make the VM unique (sysprep, etc.), then stop the VM (note it is not yet owned by a tenant), then personalize the image as required on the following boot of the VM, transferring the IP and other resources to the appropriate user.

This means the scheduler could first look for a pre-started VM, then fall back to an empty host if there is no pre-started VM that can be claimed.

A service outside of Nova can monitor the current level of pre-started VMs, and start more as needed. Notifications can be sent when each pre-started VM is claimed, so they can be used to trigger the starting of new VMs.
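The claim-or-fall-back logic described above could be sketched as follows; the pool structure and names are hypothetical:

```python
# Sketch of the scheduler-side claim: first try to claim a matching
# pre-started VM; only if none is available does the request fall back
# to a normal cold boot on an empty host. Names are illustrative.

class PrestartedPool:
    def __init__(self):
        self._pool = []   # list of (image, flavor, vm_id)

    def add(self, image, flavor, vm_id):
        self._pool.append((image, flavor, vm_id))

    def claim(self, image, flavor):
        for entry in list(self._pool):
            if entry[0] == image and entry[1] == flavor:
                self._pool.remove(entry)
                return entry[2]   # personalize + transfer IP happen next
        return None               # fall back to a regular cold boot

pool = PrestartedPool()
pool.add('win2012', 'm1.large', 'vm-1')
assert pool.claim('win2012', 'm1.large') == 'vm-1'
assert pool.claim('win2012', 'm1.large') is None   # pool now empty
```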

(Session proposed by John Garbutt)


Tuesday November 5, 2013 3:40pm - 4:20pm
AWE Level 2, Room 204-205

4:40pm

Implementing Private clouds on Nova via aggregates
A private cloud in this context is a set of compute hosts that a tenant has dedicated access to. The hosts themselves remain part of the Nova configuration, i.e. this is different from bare-metal provisioning in that the tenant is not getting access to the host OS - just a dedicated pool of compute capacity. This gives the tenant guaranteed isolation for their instances, at the premium of paying for a whole host.

The concept was discussed briefly (with a good reception) at the Havana summit, and since then it has evolved into a working prototype based almost exclusively on existing aggregate functionality. (In effect it provides a user-facing abstraction over aggregates.) Although this looks like a scheduler topic, it is mainly about a new layer of aggregate manipulation logic on the API server.

Operations a User can perform:
- Create a Pcloud
- Allocate hosts to a Pcloud (of a specific host-flavor and AZ)
- See hosts and instances in the Pcloud
- Schedule instances to a Pcloud
- Give other tenants access to schedule to the Pcloud
- Change the scheduling parameters (cpu allocation ratio, memory allocation ratio)

Operations a cloud administrator can perform:
- Define a host flavor
- Mark specific physical hosts as being available as a host-flavor

I want to use this session to describe the approach and get feedback on both the current design and what additional features are required. I also want to discuss some of the potential race conditions in host allocation and what might be done to address those, and touch on the performance aspects of using aggregates for scheduling and what might be done to improve those.
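The aggregate-based isolation at the heart of the proposal could be sketched as a scheduler filter; the metadata keys here are hypothetical, not the prototype's actual keys:

```python
# Sketch of a Pcloud scheduler filter: a host passes only if the
# aggregate it belongs to is dedicated to (or shared with) the
# requesting tenant. Metadata key names are assumptions.

def pcloud_filter(host_aggregate_metadata, request_tenant_id):
    """Return True if this host may serve the request."""
    owner = host_aggregate_metadata.get('pcloud:tenant_id')
    if owner is None:
        return False          # host is not in any Pcloud
    allowed = host_aggregate_metadata.get('pcloud:shared_with', [])
    return request_tenant_id == owner or request_tenant_id in allowed

meta = {'pcloud:tenant_id': 't1', 'pcloud:shared_with': ['t2']}
assert pcloud_filter(meta, 't1')      # owner can schedule here
assert pcloud_filter(meta, 't2')      # tenant granted access
assert not pcloud_filter(meta, 't3')  # everyone else is excluded
```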


(Session proposed by Phil Day)


Tuesday November 5, 2013 4:40pm - 5:20pm
AWE Level 2, Room 204-205

5:30pm

Flavor level capabilities
A provider may offer many different types of virtualization services, and not all Nova features may be offered by all services, or they may want to limit a feature on some flavors. Currently an API call has to make it all the way down to the driver before it can bubble back up that the driver doesn't support that feature (which is very inefficient). I would like to discuss a way that we could discover this at a higher level.

A first thought would be to assign a set of available capabilities at the flavor level. I'm also open to discussing other ways that such a feature may be implemented.
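One way the flavor-level idea could work, sketched under assumed names: store a capability list in the flavor's extra specs and reject unsupported calls at the API layer, before they reach the driver.

```python
# Sketch of an API-level capability check. The 'capabilities' extra-spec
# key and its comma-separated format are illustrative assumptions.

def check_capability(flavor_extra_specs, capability):
    caps = flavor_extra_specs.get('capabilities', '')
    return capability in caps.split(',') if caps else False

flavor = {'capabilities': 'live-migration,resize'}
assert check_capability(flavor, 'resize')
# Unsupported feature: the API can return an error immediately instead
# of letting the call travel all the way down to the driver.
assert not check_capability(flavor, 'live-snapshot')
```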

(Session proposed by creiht)


Tuesday November 5, 2013 5:30pm - 6:10pm
AWE Level 2, Room 204-205
 
Wednesday, November 6
 

11:15am

Nova Project Structure and Process
The Nova project has been growing rapidly. With this growth comes some growing pains. Let's discuss the project's leadership structure and the processes used to manage our work. In particular, we'll discuss some areas where we need people to step up and lead, as well as some changes to blueprint review and prioritization for Icehouse.

It is particularly important for those involved with leading sub-teams to come to this session, so that we have a clear understanding of how the process will work this cycle.

(Session proposed by Russell Bryant)


Wednesday November 6, 2013 11:15am - 11:55am
AWE Level 2, Room 204-205

12:05pm

Rethinking scheduler design
Currently the scheduler exhaustively searches for an optimal solution to the requirements given in a provisioning request. I would like to explore breaking the scheduler problem down so that less-than-optimal, but "good enough", answers can be given. I believe this approach could deal with a couple of current problems that I see with the scheduler and also move us towards a generic scheduler framework that all of OpenStack can take advantage of:

- Scheduling requests for a deployment with hundreds of nodes take seconds to fulfill. For deployments with thousands of nodes this can be minutes.

- The global nature of the current method does not lend itself to scale and parallelism.

- There are still features that we need in the scheduler, such as affinity, that are difficult to express and add more complexity to the problem.
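One simple "good enough" strategy, sketched with hypothetical data shapes: score only a small random sample of hosts rather than the whole deployment, trading optimality for a bounded, parallelizable amount of work.

```python
import random

# Sketch of sampled, non-exhaustive scheduling: examine a random subset
# of hosts and take the best fit from that subset. Host dicts, the
# 'fits' predicate, and the scoring key are all illustrative.

def good_enough_host(hosts, fits, key, sample_size=10, rng=random):
    """hosts: list of host dicts; fits: predicate; key: score (lower = better)."""
    sample = rng.sample(hosts, min(sample_size, len(hosts)))
    candidates = [h for h in sample if fits(h)]
    return min(candidates, key=key) if candidates else None

hosts = [{'name': 'h%d' % i, 'free_ram': i * 512} for i in range(100)]
pick = good_enough_host(hosts,
                        fits=lambda h: h['free_ram'] >= 2048,
                        key=lambda h: h['free_ram'])
assert pick is None or pick['free_ram'] >= 2048
```

The cost is O(sample_size) per request instead of O(hosts), which directly addresses the seconds-to-minutes scaling problem above, and independent schedulers can sample in parallel.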

Etherpad: https://etherpad.openstack.org/p/RethinkingSchedulerDesign

IRC discussion: #openstack-scheduling on freenode


(Session proposed by Michael H Wilson)


Wednesday November 6, 2013 12:05pm - 12:45pm
AWE Level 2, Room 204-205

2:00pm

Nova Un-conference (Wednesday)
This session will give an opportunity to cover a variety of topics for 10 minutes each. If you would like to participate, please sign up for a slot on the following etherpad.

https://etherpad.openstack.org/p/NovaIcehouseSummitUnconference

(Session proposed by Russell Bryant)


Wednesday November 6, 2013 2:00pm - 2:40pm
AWE Level 2, Room 204-205

2:50pm

Smarter resource placement for intense workloads
Intense workloads on OpenStack are here to stay. This requires us to place aggregate/group workloads (in terms of compute, storage, network) optimally to effectively utilize available resources. Over the last few months, the scheduler subgroup has made good progress in defining new directions for the scheduler. In this session, in addition, we discuss placement algorithms and constraints that are aligned with the subgroup's activities.

https://docs.google.com/document/d/1cR3Fw9QPDVnqp4pMSusMwqNuB_6t-t_neFqgXA98-Ls/edit#heading=h.sxmednu8fdh5

Collaborators:
* Gary Kotton
* Yathi Udipi

(Session proposed by Debo~ Dutta)


Wednesday November 6, 2013 2:50pm - 3:30pm
AWE Level 2, Room 204-205

3:40pm

Extensible Scheduler Metrics
Session Lead(s): Paul Murray (HP)

The context is how to create a mechanism that allows new resource metrics to be added to the scheduler in a flexible way.

User Stories:
- I want to be able to define a new filter that tracks network bandwidth entitlement
- I want to be able to define a new filter that tracks CPU entitlement
- I want to be able to schedule based on utilization.
- I want to be able to schedule based on power consumption.
- PCI-style metrics in scheduling ("I have a limited number of X per host") versus continuous (CPU), or nearly so (memory), metrics

References:
https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling


Topics for discussion:

* What should the Nova scheduler's relationship with Ceilometer be? Possible answers are:
- Provide metrics to Ceilometer (or, at least, offer the same sources to Ceilometer)
- Optionally consume additional metrics from Ceilometer for advanced scheduling
- Depend on Ceilometer for metrics
- An extensible plugin framework on the Nova compute node to collect various metric data for use by the Nova schedulers; that data could also be sent to Ceilometer for other advanced usages, such as alarming.

* What needs to be changed in the current Nova data model (flavours / compute_nodes)?
- See https://docs.google.com/document/d/1m7Uda4lgNOyAUnlJuHi2m1nqjp1Gi4T13RBUNBgDhNM
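The extensible-plugin answer above could be sketched as a small registry on the compute node; the decorator, metric names, and host-state shape are all assumptions:

```python
# Sketch of an extensible metrics mechanism: plugins register a callable
# per metric name, and the collected dict is what would be reported to
# the scheduler (and optionally forwarded to Ceilometer).

METRIC_PLUGINS = {}

def register_metric(name):
    def decorator(fn):
        METRIC_PLUGINS[name] = fn
        return fn
    return decorator

@register_metric('cpu.percent')
def cpu_percent(host_state):
    return host_state.get('cpu_used', 0) * 100.0 / host_state['cpu_total']

def collect_metrics(host_state):
    return {name: fn(host_state) for name, fn in METRIC_PLUGINS.items()}

metrics = collect_metrics({'cpu_used': 2, 'cpu_total': 8})
assert metrics['cpu.percent'] == 25.0
```

A new filter (e.g. for bandwidth entitlement) would then just register another callable, without touching the scheduler core.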

(Session proposed by Lianhao Lu)


Wednesday November 6, 2013 3:40pm - 4:20pm
AWE Level 2, Room 204-205

4:40pm

Instance Group Model and API Extension
An instance group provides a mechanism to describe a topology of instances and the relationships between them, including the policies that apply to them. This is useful to create/schedule a group of instances, where we would like to apply certain policies like “anti-affinity”, “network-proximity”, “group-affinity”, etc., to a specific set of groups, and create them as a whole.

This is highly applicable for the Smart Resource Placement effort, and the APIs are the starting point for making combined smart decisions for scheduling the entire topology.
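As a concrete illustration of one of the policies mentioned, an "anti-affinity" check at schedule time might look like this; the data shapes are hypothetical:

```python
# Sketch of enforcing a group anti-affinity policy: a host is rejected
# if it already runs any member of the same instance group.

def host_passes_anti_affinity(host_instance_uuids, group_member_uuids):
    return not (set(host_instance_uuids) & set(group_member_uuids))

group = ['i-1', 'i-2']
assert not host_passes_anti_affinity(['i-1', 'i-9'], group)  # i-1 already here
assert host_passes_anti_affinity(['i-8', 'i-9'], group)      # host is clean
```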

Scheduler meeting minutes on the API discussion:
http://eavesdrop.openstack.org/meetings/scheduler/2013/scheduler.2013-10-08-15.04.html

https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing

Collaborators: Yathi Udupi, Debo Dutta, Mike Spreitzer, Gary Kotton

(Session proposed by Yathiraj Udupi)


Wednesday November 6, 2013 4:40pm - 5:20pm
AWE Level 2, Room 204-205

5:30pm

Adding plugability at the API layer
This idea started a while back while working with Cinder and Lunr. When you already have a system that provides all of the provisioning and coordination, it seems like overkill to have all of the overhead of the rest of the infrastructure just to pass through API calls.

I would like to propose and discuss the idea of having a plugin capability at the API layer of Nova and other Nova-like services.

1. This would provide a clean abstraction for back ends trying to integrate with Nova that already provide the infrastructure for provisioning resources.

2. This would enable more opportunities to innovate and experiment within a project without causing disruption to the current codebase.

3. This would provide a cleaner way to implement API compatibility for OpenStack services.

I would like to engage in a discussion to see if there is interest in the larger community in pursuing this further, as I think down the road this will provide a lot more flexibility.
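The pass-through idea could be sketched as a per-resource dispatch at the API layer; the class and backend signatures here are assumptions, not a proposed interface:

```python
# Sketch of API-layer pluggability: requests for a resource are routed
# to whichever backend plugin is registered for it, falling back to the
# normal compute path otherwise. All names are hypothetical.

class ApiDispatcher:
    def __init__(self, default_backend):
        self._default = default_backend
        self._plugins = {}

    def register(self, resource, backend):
        self._plugins[resource] = backend

    def handle(self, resource, method, body):
        backend = self._plugins.get(resource, self._default)
        return backend(method, body)

def default_backend(method, body):
    return ('nova', method, body)      # the regular provisioning stack

def custom_backend(method, body):
    return ('custom', method, body)    # a back end with its own provisioning

dispatcher = ApiDispatcher(default_backend)
dispatcher.register('volumes', custom_backend)
assert dispatcher.handle('volumes', 'POST', {})[0] == 'custom'
assert dispatcher.handle('servers', 'GET', {})[0] == 'nova'
```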

(Session proposed by creiht)


Wednesday November 6, 2013 5:30pm - 6:10pm
AWE Level 2, Room 204-205
 
Thursday, November 7
 

9:00am

PCI Passthrough : the next step
Beyond base PCI passthrough, there are further requirements from PCI passthrough users, for example:
https://bugs.launchpad.net/nova/+bug/1222990

Here are the things to address:

1. PCI API: enable the admin to get PCI information from the hypervisor and from instances, and show detailed PCI device info.
The initial code is at https://github.com/yjiang5/pci_api; what we still need is:
a) v2/v3 API support
b) test cases
c) documentation


2. Enhance migration/attach/detach of PCI devices, especially for software-aware migration such as bonding drivers.

3. Enhance extra_info for PCI devices. Currently we have extra info in the DB to hold PCI device information such as network_id, switch_ip_address, etc., but there is no method to configure it.

4. Enhance the PCI device whitelist to support PCI-address-level filters.

5. Add PCI tests to the OpenStack test framework.

6. Store stats in a separate table to avoid an update every minute.

7. Change the PF_function to the 00:0000:00.0 format, to get a uniform format between different components.
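For item 4, an address-level whitelist filter might match wildcard patterns against PCI addresses; the pattern syntax below is an illustrative assumption, not the implemented one:

```python
import re

# Sketch of a PCI-address-level whitelist: entries may use '*'
# wildcards against an address like 'domain:bus:slot.function'.

def address_matches(pci_address, whitelist_patterns):
    for pattern in whitelist_patterns:
        # Turn each '*' into "one or more hex digits" and anchor the match.
        regex = '^' + re.escape(pattern).replace(r'\*', '[0-9a-f]+') + '$'
        if re.match(regex, pci_address):
            return True
    return False

assert address_matches('0000:06:00.1', ['0000:06:*.*'])      # any fn on bus 06
assert not address_matches('0000:07:00.1', ['0000:06:*.*'])  # other bus rejected
```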


(Session proposed by Yongli He)


Thursday November 7, 2013 9:00am - 9:40am
AWE Level 2, Room 204-205

9:50am

The road to live upgrades
We have made progress over the last few releases towards supporting live rolling upgrades of Nova. Let's get together again to review the status of this effort and the items we would like to accomplish in Icehouse.

(Session proposed by Russell Bryant)


Thursday November 7, 2013 9:50am - 10:30am
AWE Level 2, Room 204-205

11:00am

Nova Objects update and plans forward
We need to talk about the current state of objects, as well as what else needs to be done (hint: plenty). We also need to discuss breaking out the common base into Oslo so that other projects can use them.

(Session proposed by Dan Smith)


Thursday November 7, 2013 11:00am - 11:40am
AWE Level 2, Room 204-205

11:50am

Metadata Service Enhancements: Callbacks + Network
Callbacks:

Building on set-password, provide a generic mechanism to set (but not overwrite) server metadata, so we have a way to communicate with in-guest actions. The existing user quotas on metadata should still apply.

Example use case: set "boot_completed_seconds=20" when looking to profile your actual boot times.
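The set-but-not-overwrite semantics could be sketched like this; the function name and quota handling are assumptions:

```python
# Sketch of write-once metadata: an in-guest agent may create a key
# once, but later attempts to change it are rejected, so reported
# values cannot be tampered with afterwards. Quotas still apply.

def set_once(metadata, key, value, quota=128):
    if key in metadata:
        raise ValueError('key %r already set' % key)
    if len(metadata) >= quota:
        raise ValueError('metadata quota exceeded')
    metadata[key] = value

md = {}
set_once(md, 'boot_completed_seconds', '20')
assert md['boot_completed_seconds'] == '20'
try:
    set_once(md, 'boot_completed_seconds', '999')   # overwrite attempt
except ValueError:
    pass
else:
    raise AssertionError('overwrite should be rejected')
```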


Networking Information:

Let's look at improving the networking info given to instances in both the config drive and the metadata service.

This would turn into a neutron API proxy, but it seems messy to have two metadata service API endpoints for users of openstack, so maybe this is the best trade-off? We could look at moving it into neutron, with neutron just proxying the nova information, but that makes config drive much harder to implement.

Example use case: have the first interface configured via DHCP, but have other interfaces on private networks configured statically.


More discussion here:
https://etherpad.openstack.org/IcehouseNovaMetadataService

(Session proposed by John Garbutt)


Thursday November 7, 2013 11:50am - 12:30pm
AWE Level 2, Room 204-205

1:50pm

Nova Un-conference (Thursday)
This session will give an opportunity to cover a variety of topics for 10 minutes each. If you would like to participate, please sign up for a slot on the following etherpad.

https://etherpad.openstack.org/p/NovaIcehouseSummitUnconference

(Session proposed by Russell Bryant)


Thursday November 7, 2013 1:50pm - 2:30pm
AWE Level 2, Room 204-205

2:40pm

Horizontally scalable db backend
Registering this as a Nova session, but it really applies much more broadly.

I'd like to discuss adding an alternative DB backend to Nova/Glance/Keystone/etc.

SQLAlchemy offers a diverse set of backends like MySQL, PostgreSQL, sqlite, etc, but none of those meet the needs of a system like OpenStack.

OpenStack needs a horizontally scalable, reliable, failure tolerant data store.

Amazon's seminal Dynamo paper is an inspiration in this space and is the basis for Riak and Cassandra which both seem like very likely backend candidates for this work.

I'd like to suggest using a somewhat backend agnostic approach, so instead of using Riak or Cassandra directly, we could target a well-known API such as AWS SimpleDB or DynamoDB. Client libraries already exist, the data store already exists (in the shape of AWS's own implementations), and its behaviour is well defined and well understood and it's known to scale.

BasicDB, an implementation of Amazon's SimpleDB, recently saw the light of day and a DynamoDB implementation might follow in its tracks, so there's a path towards an entirely free deployment, but the consumer side doesn't need to wait for the server to exist before getting started.

(Session proposed by Soren Hansen)


Thursday November 7, 2013 2:40pm - 3:20pm
AWE Level 2, Room 204-205

3:30pm

Nova V3 API
The V3 API is marked as experimental. The discussion would be around what we need to do to get it considered stable enough to be the default API.

Topics to discuss:
- Enforce access at API level, not DB level
- Cleanup (XML and consistency generally)
- Review of what is core/non-core
- Security related issues
- eg. remove os-personalities?
- what else?
- Remove pagination support?
(see http://summit.openstack.org/cfp/details/6)
- Multiple create support for instance creation
- 207 status
- proper networking support
- Automating spec creation (beyond api samples)
- What level of testing do we need to consider the V3 API to be releasable as the default?

Also maybe something on Pecan/WSME for Nova, but this may be more appropriate in a separate session or included in a more general Pecan/WSME discussion.

(Session proposed by Christopher Yeoh)


Thursday November 7, 2013 3:30pm - 4:10pm
AWE Level 2, Room 204-205

4:30pm

Using Pecan/WSME for the Nova V3 API
This session will include the following subject(s):

Using Pecan/WSME for the Nova V3 API:

A discussion around whether we should replace our WSGI framework with Pecan/WSME for the V3 API. And if so, how can we do this without the level of disruption that occurred with the initial V3 work done in Havana.

Note that I think this discussion would be a lot more productive if it was scheduled some time after the proposed oslo session "Creating REST services with Pecan/WSME"

http://summit.openstack.org/cfp/details/154

(Session proposed by Christopher Yeoh)

API validation for the Nova v3 API:

32% of Nova v3 API parameters are not validated in any way [1].
If a client sends an invalid request, an internal error happens and OpenStack operators have to investigate its cause.
That is hard work for the operators.

For the Nova v3 API, WSGI will be replaced with Pecan/WSME, and some related sessions have been proposed.
We should implement basic validation features on top of WSME if we need more features.

Some basic features have now been proposed [2], and I hope we can reach a consensus about these features and discuss how to implement them.

This session is related to the "Using Pecan/WSME for the Nova V3 API" session (http://summit.openstack.org/cfp/details/165).
It would be better to schedule this after the "Using Pecan/WSME for the Nova V3 API" session, or merge this into it.

[1]: https://wiki.openstack.org/wiki/NovaApiValidationFramework
[2]: http://lists.openstack.org/pipermail/openstack-dev/2013-October/016635.html
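The kind of basic validation feature discussed could be sketched as a schema check at the API boundary; the schema format here is a simple illustration, not the proposed framework:

```python
# Sketch of request-body validation: a schema declares expected types,
# and invalid input is rejected with a clear 400-style error instead of
# surfacing later as an internal error the operator must investigate.

class ValidationError(Exception):
    pass

def validate(body, schema):
    for name, expected_type in schema.items():
        if name not in body:
            raise ValidationError('missing parameter: %s' % name)
        if not isinstance(body[name], expected_type):
            raise ValidationError('%s must be %s'
                                  % (name, expected_type.__name__))

schema = {'name': str, 'min_count': int}
validate({'name': 'vm1', 'min_count': 2}, schema)   # valid request passes
try:
    validate({'name': 'vm1', 'min_count': 'two'}, schema)
except ValidationError as e:
    assert 'min_count' in str(e)                    # clear, actionable error
```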


(Session proposed by Ken'ichi Ohmichi)


Thursday November 7, 2013 4:30pm - 5:10pm
AWE Level 2, Room 204-205

5:20pm

Cross project request ids
With Tempest now running tests in parallel, it has become a lot more difficult to debug failures, because request ids are project-specific and timestamps cannot be used reliably given the number of tests running at the same time. Request ids which cross service boundaries would make debugging test failures in the gate much easier.

Some work was done during Havana to implement this but it currently appears stalled due to some security concerns. See

https://review.openstack.org/#/c/29342
https://review.openstack.org/#/c/29480

I think a session would really help find a solution to these concerns.
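The core mechanism under discussion could be sketched as middleware that reuses an incoming request id if present and generates one otherwise; the header name below is illustrative:

```python
import uuid

# Sketch of cross-service request ids: reuse the id a calling service
# sent, otherwise mint a fresh one, so a single id can follow one
# request across service boundaries. The header name is an assumption.

HEADER = 'X-OpenStack-Request-ID'

def ensure_request_id(headers):
    if HEADER not in headers:
        headers[HEADER] = 'req-' + str(uuid.uuid4())
    return headers[HEADER]

incoming = {HEADER: 'req-123'}          # id set by the calling service
assert ensure_request_id(incoming) == 'req-123'

fresh = ensure_request_id({})           # edge of the system: mint a new id
assert fresh.startswith('req-')
```

The security concerns referenced above centre on trusting an externally supplied id, so a real implementation would have to decide when the incoming header may be honoured.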

(Session proposed by Christopher Yeoh)


Thursday November 7, 2013 5:20pm - 6:00pm
AWE Level 2, Room 204-205
 
Friday, November 8
 

9:00am

VMWare Driver Roadmap For Icehouse
Areas where we would like to improve the VMware driver for Nova. These include:

* Scheduler enhancements, such as:
  * Supporting local storage
  * Image-aware scheduling
  * Improving live migrations
  * Adding awareness of the topology of the host aggregate

* Nova snapshots:
  * Handling the multiple files that result from a snapshot
  * Handling attached volumes

* Running with multiple nova-compute agents

* Misc improvements:
  * Hot-plug NICs
  * VM pause/resume
  * VNC password per VM
  * Improve unit test coverage
  * Move to mock from mox

Please see the etherpad for more details - https://etherpad.openstack.org/p/T4tQMQf5uS

(Session proposed by Tracy Jones)


Friday November 8, 2013 9:00am - 9:40am
AWE Level 2, Room 204-205

9:50am

libvirt driver roadmap
This session is for those interested in discussing new development for the libvirt compute driver.

(Session proposed by Russell Bryant)


Friday November 8, 2013 9:50am - 10:30am
AWE Level 2, Room 204-205

11:00am

Docker support in OpenStack
Let's discuss how Docker support can be improved in OpenStack, especially in Nova. Starting with the Havana release, there is a Docker driver that allows Nova to deploy instances using Docker containers.

(Session proposed by Sam Alba)


Friday November 8, 2013 11:00am - 11:40am
AWE Level 2, Room 204-205

11:50am

XenAPI Roadmap for Icehouse
Get together and set priorities for the Icehouse work on XenServer support.

Including things like:
* gating tests for XenServer
* Review any hypervisor support gaps
* using/building supported interfaces
* Live-migrate based Resize (remove dependency on rsync)
* PCI passthrough
* Support for virtual GPUs
* Supporting LVM based storage
* Local storage volumes

For details see:
https://etherpad.openstack.org/IcehouseXenAPIRoadmap

(Session proposed by John Garbutt)


Friday November 8, 2013 11:50am - 12:30pm
AWE Level 2, Room 204-205

1:30pm

Nova Un-conference (Friday)
This session will give an opportunity to cover a variety of topics for 10 minutes each. If you would like to participate, please sign up for a slot on the following etherpad.

https://etherpad.openstack.org/p/NovaIcehouseSummitUnconference

(Session proposed by Russell Bryant)


Friday November 8, 2013 1:30pm - 2:10pm
AWE Level 2, Room 204-205

2:20pm

Work in IceHouse around nova DB
This session will include the following subject(s):

Work in IceHouse around nova DB:

We did a lot of work in Havana around the DB across the whole of OpenStack, and especially in Nova. I don't see any reason to stop improving it.


Nova also leads in DB-related things, so it will be nice to discuss the following points:
1) Use the oslo.db lib

2) Switch from sqlalchemy-migrate to alembic
(we have already found one approach that allows us to do all this without rewriting old migrations)

3) Keep models & migrations in sync
https://review.openstack.org/#/c/42307/

4) In unit tests, use a DB created from the models

5) Run unit tests against all backends (not only sqlite)

6) If we implement 3 & 4, we could drop sqlite support in migrations

7) Get rid of soft delete, or implement a fast and safe purge engine (not archiving).

We have a lot of problems with soft deletion:
1) a purge engine is required
2) bad performance
3) complicated logic

It seems that in almost all cases we are able to delete records instantly, so we should analyze this situation in depth and probably refactor the current DB layer to get rid of soft deletion.


8) Get from the DB only what we actually need. We currently always fetch the three columns "created_at", "updated_at" and "deleted_at" (half of the DB traffic, sic!) even though they are rarely used.
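The purge engine mentioned in point 7 could be sketched as a batched hard delete; the table and column names below are illustrative, and sqlite is used only to keep the sketch self-contained:

```python
import sqlite3

# Sketch of a batched purge engine: hard-delete soft-deleted rows in
# small batches to keep each transaction short. Table/column names are
# illustrative assumptions.

def purge_soft_deleted(conn, table, batch=1000):
    total = 0
    while True:
        cur = conn.execute(
            'DELETE FROM %s WHERE rowid IN '
            '(SELECT rowid FROM %s WHERE deleted != 0 LIMIT ?)' % (table, table),
            (batch,))
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (id INTEGER, deleted INTEGER)')
conn.executemany('INSERT INTO instances VALUES (?, ?)',
                 [(i, i % 2) for i in range(10)])
assert purge_soft_deleted(conn, 'instances', batch=3) == 5  # the 5 odd ids
```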

(Session proposed by Boris)

Next steps for database improvement:

I'd like to see a general database session in the nova track.

Proposed items to discuss:

* the database CI stuff that Josh Hesketh and I have been working on
* whether a move to alembic makes sense

(Session proposed by Michael Still)


Friday November 8, 2013 2:20pm - 3:00pm
AWE Level 2, Room 204-205

3:10pm

AWS compatibility perspective back into OpenStack
OpenStack's growth has been largely around its own API, making it an excellent choice for private cloud solutions. However, large cloud vendors such as Amazon AWS remain dominant and continue to excel in the public cloud space.
OpenStack began by providing EC2 API compatibility, and still does, but the growth and innovation around the OpenStack API needs to be complemented with continued support for the EC2 API and compatible feature parity with AWS, for OpenStack to really be a strong choice for public cloud migration and adoption.
I think compatibility is an important middle ground between innovation and existing-user migration, and shouldn't be ignored.

A part of the community can contribute to ensuring AWS compatibility in parallel with the ongoing innovation efforts around OpenStack API.
This session proposes the following:
* Present data to show EC2 API compatibility gaps in the current software
* Discuss where third-party compatibility testing has been lacking and needs improved coverage, particularly in Tempest
* Call for ways the community can collaborate on development efforts for third-party compatibility

(Session proposed by Rohit Karajgi)


Friday November 8, 2013 3:10pm - 3:50pm
AWE Level 2, Room 204-205

4:10pm

Hyper-V: Icehouse features
Discussing the features that we want to add for Hyper-V support in Icehouse.

(Session proposed by Alessandro Pilotti)


Friday November 8, 2013 4:10pm - 4:50pm
AWE Level 2, Room 201B