Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.

Wednesday, November 6
 

11:15am

Neutron QA and Testing
During Havana we made substantial progress with Neutron in the gate; however, it is still not quite at the same level as the other projects. This session will be a joint QA/Neutron session (held in the Neutron track) to address some of these issues. These include:

Full tempest gating - what stands between us and being able to remove the limitation that Neutron can only run the smoke jobs.

Parallel testing support - what stands between us and being able to run tests in parallel against Neutron environments.

Upgrade testing - what are the plans for getting Neutron into Grenade's configuration and upgrade testing.

(Session proposed by Matthew Treinish)


Wednesday November 6, 2013 11:15am - 11:55am
AWE Level 2, Room 202

12:05pm

Tempest Policy in Icehouse
Over time we've built up a lot of culture around the right and wrong ways to get code into tempest, where things are appropriate, and the process by which we allow skips. We should set aside a session to capture all the lore we agree on into an etherpad so that it can be turned into official documentation for the project.

We should also look ahead at some of the questions and challenges that core team members are having with reviews. This includes how to address some of the new projects coming in, and what should be required of an integrated project.

(Session proposed by Sean Dague)


Wednesday November 6, 2013 12:05pm - 12:45pm
AWE Level 2, Room 202

2:00pm

Who watches the watchers - aka Tempest Unit Tests
During H3 I added some unit tests for tempest. The only tests there right now are very basic: they just verify that the wrapper scripts used in tox return the correct exit code on success or failure (this was prompted by a bug in one script that 'fixed' the gate for a few hours, i.e. every test always passed).

In the Icehouse cycle it would be good to start seeing unit tests added for tempest so that we can verify that tempest itself works correctly. So I think it would be good to have a discussion on which parts of tempest require unit testing and how we should go about implementing them, in addition to anything else related to verifying that tempest works as expected and that we don't introduce regressions through changes to the tempest code.

aka Quis custodiet ipsos custodes
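
As a rough illustration of the kind of test being described (this is not the actual tempest unit-test code; the commands below are stand-ins for a wrapper script), a unit test can simply check that exit codes are propagated correctly:

    import subprocess
    import unittest


    class WrapperExitCodeTest(unittest.TestCase):

        def _run(self, *cmd):
            # Run the command and return only its exit code.
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            proc.communicate()
            return proc.returncode

        def test_success_returns_zero(self):
            # 'true' stands in for the wrapper invoking a passing test run.
            self.assertEqual(0, self._run('true'))

        def test_failure_returns_nonzero(self):
            # 'false' stands in for a failing run; a failing run must never
            # be reported as success -- exactly the bug class that 'fixed'
            # the gate.
            self.assertNotEqual(0, self._run('false'))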

(Session proposed by Matthew Treinish)


Wednesday November 6, 2013 2:00pm - 2:40pm
AWE Level 2, Room 202

2:50pm

Tempest Stress Test - Overview and Outlook
The stress-tests blueprint introduced a new way of stress testing. This session will briefly show the current state of the implementation, and then we can discuss the next steps.
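
As a generic illustration of the idea only (this is not the tempest stress framework, and the helper names are hypothetical), a stress run drives the same action from several workers at once and counts failures:

    import multiprocessing


    def action(_):
        # Stand-in for a real API call to hammer, e.g. boot and delete a server.
        # Return 0 for success, non-zero for failure.
        return 0


    def run_stress(workers=4, iterations=100):
        pool = multiprocessing.Pool(workers)
        try:
            results = pool.map(action, range(iterations))
        finally:
            pool.close()
            pool.join()
        return sum(1 for result in results if result != 0)


    if __name__ == '__main__':
        print('failures: %d' % run_stress())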

(Session proposed by Marc Koderer)


Wednesday November 6, 2013 2:50pm - 3:30pm
AWE Level 2, Room 202

3:40pm

Parallel tempest moving forward
The week before feature freeze in Havana we switched tempest from running its tests serially to running all of them in parallel using testr. This greatly improved runtime and made tempest finish in about the same amount of time as the py26 unit tests for some of the other projects (20-30 mins). Additionally, making multiple requests at once shakes loose more bugs in the projects, since it more closely simulates an actual workload. However, moving to parallel execution came at the cost of changing how tests interact with each other and an increased risk of nondeterministic failures in the gate.

So moving into Icehouse I think it's important that we discuss what additional changes need to be made to further improve parallel stability, and whether there are any other optimizations we can make to improve tempest performance. It will also be a good forum to discuss things to watch out for when adding tests to tempest, and possibly adding automated checks to catch common parallel execution issues before we merge them.
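
As a minimal, generic sketch of the isolation rule that parallel execution depends on (illustrative Python only, not actual tempest code): each test should create uniquely named resources so that concurrent test workers never collide on shared state.

    import unittest
    import uuid


    def rand_name(prefix):
        # Append a random suffix so that two workers starting from the
        # same prefix still get distinct resource names.
        return '%s-%s' % (prefix, uuid.uuid4().hex[:8])


    class ParallelSafeNamingTest(unittest.TestCase):

        def test_resource_names_are_unique(self):
            # Stand-in for naming a server, network, volume, etc.
            first = rand_name('tempest-server')
            second = rand_name('tempest-server')
            self.assertNotEqual(first, second)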

(Session proposed by Matthew Treinish)


Wednesday November 6, 2013 3:40pm - 4:20pm
AWE Level 2, Room 202
 
Thursday, November 7
 

1:50pm

Testing Rolling Upgrades
Going forward, OpenStack is going to need to be able to test rolling upgrades, which means multiple versions of services and components interacting with each other under test.

For the project as a whole, this means running with Havana Nova and Icehouse Keystone (for example). From Nova's perspective, this means running scenarios with an Icehouse control plane and Havana compute.

(Session proposed by Dan Smith)


Thursday November 7, 2013 1:50pm - 2:30pm
AWE Level 2, Room 202

2:40pm

Grenade Update
A look at changes and ideas for Grenade, and the possibility of using it with non-DevStack deployments.

(Session proposed by Dean Troyer)


Thursday November 7, 2013 2:40pm - 3:20pm
AWE Level 2, Room 202

3:30pm

Enabling Multi-Node Testing
Currently, tempest does not have test cases for features that require multiple nodes (e.g. migration, the scheduler, filters, availability zones, etc.). The only tests I have seen are the 'live migration' tests, but they do not run in the gate job.

This is a proposal to enable multi-node testing so that tempest can cover features that require multiple nodes in the gate job. Some possible discussion points:
* Setting up a multi-node job
* Running this job in the gate
* Putting test cases in the scenario tests

(Session proposed by Zhi Kun Liu)


Thursday November 7, 2013 3:30pm - 4:10pm
AWE Level 2, Room 202
 
Friday, November 8
 

11:00am

Keystone needs on the QA Pipeline
Keystone has recently started pushing tests for the Keystone client toward Tempest. Without using Tempest, Keystone client tests cannot be run against a live server.

This session will explore the approaches Keystone currently takes to testing with Tempest, ideas the Keystone team has for integrating with the QA pipeline in new ways, and the gaps that exist today in Keystone coverage, while making sure we harmonize the approaches being taken by the Keystone and QA teams.

The expected output is a set of things to help ensure solid Keystone coverage in the QA pipeline in Icehouse.

(Session proposed by Adam Young)


Friday November 8, 2013 11:00am - 11:40am
AWE Level 2, Room 202

11:50am

Coverage analysis tooling
Back in the Grizzly cycle the coverage extension for nova was introduced. It created an API to enable coverage collection and reporting for external programs on a running nova installation. A client script was added to tempest to use the extension, and a periodic nightly run was set up to watch the coverage. However, the extension and the tooling to use it haven't been used much, and we often fall back to manual inspection when we try to figure out test coverage.

This session would be dedicated to figuring out exactly what kind of tooling is needed so that we can classify and improve tempest test coverage, and enable other test suites to figure out coverage easily, along with any other steps that could increase the automation of coverage analysis and improve the usefulness of the data. Some possible topics for discussion in this session (a rough sketch of the basic collection step appears after the list):
* Improving or reworking existing coverage extension
* Adding a coverage extension to each project
* Additional tooling around coverage collection and analysis
* Running coverage on each gate run
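
For context, a minimal sketch of the kind of collection such tooling would wrap, using the standard coverage.py library (this is not the nova coverage extension's actual API):

    import coverage

    # One data file per process, so results from parallel workers or
    # multiple services can be merged later.
    cov = coverage.Coverage(data_suffix=True)
    cov.start()
    # ... exercise the code under test here (e.g. run a tempest job) ...
    cov.stop()
    cov.save()

    # Afterwards, merge the per-process data files and report.
    cov.combine()
    cov.report()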

(Session proposed by Matthew Treinish)


Friday November 8, 2013 11:50am - 12:30pm
AWE Level 2, Room 202

1:30pm

Future of Scenario testing
We discussed scenario testing at the Havana summit and, as a result, it was introduced into Tempest.
In this session, we'll discuss the status of the implementation, missing test cases, and the documentation and tools needed to increase the coverage of scenario testing.

The discussion points:
* The status of the scenario test cases
* What scenario test cases do we need to implement?
* Documentation for implementing scenario test cases
* Tools and/or frameworks that make it easy to add scenario test cases


(Session proposed by Masayuki Igawa)


Friday November 8, 2013 1:30pm - 2:10pm
AWE Level 2, Room 202

2:20pm

Negative Testing Strategy
Tempest has a lot of negative tests, but there is no shared view of how many there should be, or whether they should really be part of unit tests. There have also been discussions about fuzz testing. Here are a few things that have been discussed and from which a new consensus might emerge:

- We could have a decorator or other kind of syntax that allows a declarative way to define negative tests, but which run in the same way as existing tests. These would be easier to write and review (a rough sketch appears after this list).

- We could come up with a policy for negative test coverage and move most to unit tests.

- We could have a fuzz testing framework, possibly supported by some kind of type signature for APIs, to allow checking for fencepost errors rather than just slinging random arguments.
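
As a purely hypothetical illustration of the declarative idea above (none of this is existing tempest syntax), a decorator could mark a test as negative so that it passes only when the expected error is raised:

    import functools
    import unittest


    def negative(expected_exc):
        # Mark a test as negative: it passes only if expected_exc is raised.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                self.assertRaises(expected_exc, func, self, *args, **kwargs)
            return wrapper
        return decorator


    class FlavorsNegativeExample(unittest.TestCase):

        @negative(ValueError)
        def test_get_flavor_with_invalid_id(self):
            # Stand-in for a client call that should reject bad input.
            int('not-a-flavor-id')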

(Session proposed by David Kranz)


Friday November 8, 2013 2:20pm - 3:00pm
AWE Level 2, Room 202

3:10pm

Enhancing Debuggability in the Gate
As a wrap-up session to the QA track I want to discuss ways to increase debuggability in the gate. As our tooling gets more complicated and parallel testing gets more rigorous, being able to understand the output of gate testing becomes more and more critical. We need to push towards a world where an issue is debuggable with first-fail capture in the logs.

This will be a brainstorming and wrap-up session to enumerate some priorities on debuggability: changes to tempest, the gate, tools that understand the gate artifacts, and core projects that could get us a better debugging story.

(Session proposed by Sean Dague)


Friday November 8, 2013 3:10pm - 3:50pm
AWE Level 2, Room 202