Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.

TripleO (Deployment)
Tuesday, November 5
 

11:15am

Deployment scaling/topologies
This session will include the following subject(s):

Scaling design:

What does scaling mean at the TripleO level? Where do we do well, where are we lacking, and what do we need to design?

(Session proposed by Robert Collins)

Tuskar/TripleO support different architectures:

Justification
-------------
Tuskar should support deploying different architectures of TripleO for the
undercloud and overcloud.

Supporting a flexible deployment architecture allows Tuskar to deploy OpenStack
to meet different goals as needed. In particular, things like:

- different network topologies, such as isolated L2 network domains
- different scaling methodologies to support HA

Description
-----------
Current example deployments of TripleO use an all-in-one node for the
Undercloud, and an Overcloud with 1 control node and 1 compute node. These are
just the example (and well-documented) deployments. Tuskar should not tie you
to any one deployment architecture and should instead allow for deploying
flexible architectures.

Currently Tuskar assumes that the Undercloud is already deployed, and then uses
the Undercloud services to orchestrate a deployment of the Overcloud.

This flexibility should apply to both the Undercloud and the Overcloud.

Proposal(s)
-----------
1. Define a method so that Tuskar can bootstrap the Undercloud.
- We could start with a seed VM that Tuskar talks to in order to deploy a
simple Undercloud.
- Once the Undercloud is up, reconfigure Tuskar to talk to the Undercloud.
- Proceed with deploying the Overcloud, or scaling out the Undercloud, etc.

2. Define a mechanism for Tuskar to be able to deploy Undercloud nodes after
the seed is removed.
- One such mechanism could be using Heat on the Undercloud to deploy
additional Undercloud nodes (as opposed to just using the Undercloud heat
to deploy the Overcloud).
- The Undercloud may need to scale out new baremetal compute nodes as new
isolated L2 domains are brought online (new rack for instance). Tuskar
should be able to provision these new Undercloud nodes as opposed to a
manual setup process.
- Other resource type nodes should be able to be added to the Undercloud by
Tuskar as well (such as a new network node).
- Configuration changes on the Undercloud should be managed by Tuskar.
Tuskar should be able to use the heat stack-update API to update the
Undercloud (assuming we've used the Undercloud Heat to launch those
nodes). A sketch of this flow appears after this list.

3. Define a method to deploy different heat stacks (could apply to both
Undercloud and Overcloud).
- Tuskar should have access to a library/repo of different templates for
different node types.
- A Tuskar admin should be able to select different templates (and a
quantity of each) and deploy them as a single stack.
- A deployed stack should be able to be "stack updated" by adding additional
templates (or quantities) and updating the stack from Tuskar.
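As a rough illustration of the stack-update flow in proposals 2 and 3, the sketch below uses python-heatclient to create a stack from a template in a library and later scale it out by updating the same stack. It is a minimal sketch, not Tuskar code: the endpoint, token, template path and the `compute_count` parameter are all hypothetical placeholders, and a real template would need to be written to scale on such a parameter.

```python
# Minimal sketch (not Tuskar code): drive Heat stack create/update from a
# library of templates. Endpoint, token, template path and the
# "compute_count" parameter are hypothetical placeholders.
from heatclient.client import Client as HeatClient

heat = HeatClient('1',
                  endpoint='http://undercloud-heat:8004/v1/TENANT_ID',
                  token='UNDERCLOUD_TOKEN')

with open('templates/undercloud-compute.yaml') as f:
    template = f.read()

# Initial deployment: one stack built from the selected template(s).
heat.stacks.create(stack_name='undercloud',
                   template=template,
                   parameters={'compute_count': 1})

# Later, scale out (e.g. a new rack comes online) by updating the same
# stack with a larger quantity instead of provisioning nodes by hand.
stack = heat.stacks.get('undercloud')
heat.stacks.update(stack.id,
                   template=template,
                   parameters={'compute_count': 2})
```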


(Session proposed by James Slagle)


Tuesday November 5, 2013 11:15am - 11:55am
AWE Level 2, Room 203

12:05pm

HA/production configuration
This session will include the following subject(s):

HA next steps:

We have a working ground -> overcloud deployment now, but it's not HA! Let's get that done.

In particular, we need to identify all the things that need to be made HA and talk through single-node, 2-node and 3+-node setups for them.

(Session proposed by Robert Collins)


Tuesday November 5, 2013 12:05pm - 12:45pm
AWE Level 2, Room 203

2:00pm

Stable branch support and updates futures
This session will include the following subject(s):

Image based updates:

Updating what we've deployed is a Good Idea. https://etherpad.openstack.org/tripleo-image-updates
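One way to read "image based updates" is to rebuild an already-deployed node with a freshly built image rather than patching it in place. The sketch below shows that idea with python-novaclient against the undercloud; it assumes an older client that still accepts credentials directly (newer releases want a keystone session), and the credentials, server name and image UUID are placeholders. The actual design discussion lives in the linked etherpad.

```python
# Sketch only: replace a node's content with a new image via rebuild
# instead of modifying it in place. Credentials, the server name and the
# image UUID are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client('2', 'admin', 'PASSWORD', 'undercloud',
                          'http://undercloud-keystone:5000/v2.0')

server = nova.servers.find(name='overcloud-controller0')

# Rebuild keeps the server record but lays the new image down on the node.
nova.servers.rebuild(server, 'NEW_IMAGE_UUID')
```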

(Session proposed by Robert Collins)

Stable branch for tripleo:

TripleO is currently focused on Continuous Delivery from master branches, but we may also have a set of users who want to deploy using OpenStack's stable branches. I'd like to discuss whether this is something we would like to accommodate and, if so, how best we could do it.


(Session proposed by Derek Higgins)


Tuesday November 5, 2013 2:00pm - 2:40pm
AWE Level 2, Room 203

2:50pm

CI and CD automation
This session will include the following subject(s):

How far do we push devtest.sh?:

devtest.sh has grown in length and complexity from the original devtest.md.

Do we want to keep evolving it until it becomes a de facto standard for bootstrapping a cloud? Where do we draw the line here?

FWIW I would like to see us continue to decompose devtest.sh into components that will be individually useful to wider audiences, while making it easy for devs to spin up a full environment.

I'd like to get some wider opinions though, since we have multiple semi-intersecting interests here (devs, testing, deployers). If nothing else, the name might want to be revisited if/when toci merges into incubator :)

(Session proposed by Chris Jones)

CI testing:

We need toci in core - so we have scaled tests and get checks when Nova changes occur, etc.
Other things to consider talking about:

What jobs should we be running?
When can we expect to be gating merges on CI tests?
What platforms should we target?
How can we improve reliability?

o Virtualised resources
  o What have we got?
  o How can we best use them?

o Baremetal resources (same questions)
  o What have we got?
  o How can we best use them?
  o Are there more resources available? Where and when?

We also need to set up a periodic job that updates a set of known-working project git hashes which can be fed into any Gerrit-triggered jobs, so that Gerrit commits won't be affected by breakages in unrelated projects.
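As a rough illustration of that periodic job, the sketch below records the current master commit of each project into a JSON file that Gerrit-triggered jobs could consume. The project list and output path are made up for the example, and a real job would only publish the hashes after the combination had actually passed testing.

```python
# Rough sketch of a periodic "known good hashes" job: capture the current
# master commit of each project so Gerrit-triggered jobs can pin to a set
# known to work together. Project list and output path are examples only.
import json
import subprocess

PROJECTS = [
    'https://git.openstack.org/openstack/tripleo-incubator',
    'https://git.openstack.org/openstack/diskimage-builder',
    'https://git.openstack.org/openstack/tripleo-image-elements',
]

def master_sha(repo_url):
    # `git ls-remote <url> refs/heads/master` prints "<sha>\trefs/heads/master".
    out = subprocess.check_output(
        ['git', 'ls-remote', repo_url, 'refs/heads/master'])
    return out.split()[0].decode()

pins = {url: master_sha(url) for url in PROJECTS}

with open('known-good-hashes.json', 'w') as f:
    json.dump(pins, f, indent=2)
```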



(Session proposed by Robert Collins)


Tuesday November 5, 2013 2:50pm - 3:30pm
AWE Level 2, Room 203

3:40pm

Futures - network/discovery/topology modelling
This session will include the following subject(s):

TripleO, Tuskar: autodiscovery of hardware:

tl;dr: tuskar assumes autodiscovery of physical nodes. Is this trivial,
already done, hard, can't be done? What can we do, how can we help?

Even the earliest versions of the Tuskar UI wireframes/mock-ups assumed
that '(at some point), we'll get a list of available physical hardware
here' even if that means just a list of IP or MAC addresses. The user
story is that after an operator has plugged in a new machine to a
top-of-rack switch (within a tuskar management network), the existence
as well as attributes of that machine are made available to the Tuskar
UI. Operators will not only be able to retrieve hardware information,
but also set constraints and conditions based on the hardware attributes
[1]. All of this implies autodiscovery and enrollment of hardware.

So, what does autodiscovery look like? Is it brute force:

1 --> Talk directly to the switch and get a MAC address - this will at
least tell you that a) the machine's MAC was introduced to the switch's
MAC table, and b) you can use that to catch the IP assignment (over ARP,
for example). So you get MAC and IP.

2 --> ???

3 --> profit


Discussion points:

* How do we find out about the other attributes we're interested in -
CPU, number of physical cores, memory in GB, etc.?

* Is this really obvious, as in are there already tools that do this for
us? Can IPMI do this for us? (A small sketch follows these discussion
points.)

* There is an existing Ironic blueprint outlining a solution [2][3], are
there any obvious blockers there?

* Is this pretty much sorted in Ironic/other_tool, in which case we
should just make sure Tuskar can talk to that without issue?

* What can Tuskar devs do to help Ironic/other_tool move this along?
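To make the "Can IPMI do this for us?" question concrete, here is a small sketch that shells out to ipmitool to pull basic inventory details from a node's BMC. The BMC address and credentials are placeholders and the parsing is deliberately naive; in practice this is the kind of data an Ironic driver or discovery ramdisk [2][3] would gather and report.

```python
# Sketch: pull basic hardware facts from a node's BMC with ipmitool.
# BMC address and credentials are placeholders. A discovery ramdisk would
# gather richer data (CPU count, RAM, disks, ...) from the host itself.
import subprocess

BMC = {'host': '192.0.2.10', 'user': 'admin', 'password': 'secret'}

def ipmi(*args):
    cmd = ['ipmitool', '-I', 'lanplus',
           '-H', BMC['host'], '-U', BMC['user'], '-P', BMC['password']]
    return subprocess.check_output(cmd + list(args)).decode()

# FRU data typically includes chassis/board manufacturer, model and serial.
print(ipmi('fru', 'print'))

# The LAN configuration gives us the BMC's MAC and IP address.
print(ipmi('lan', 'print', '1'))
```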

[1]
http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf
[2] https://blueprints.launchpad.net/ironic/+spec/advanced-driver-api
[3] https://blueprints.launchpad.net/ironic/+spec/discovery-ramdisk


thanks! marios

(Session proposed by Marios Andreou)

TripleO for network infrastructure deployment:

Currently TripleO manages the deployment of bare-metal compute infrastructure for an OpenStack "overcloud". However, this does not address the deployment of the networking infrastructure (e.g. leaf and spine switches) that provides connectivity for the overcloud. This is difficult to achieve with traditional switches since they are deployed in a non-standard fashion that often varies between vendors. However, the Open Network Install Environment (ONIE, http://onie.github.io/onie/) initiative, currently managed under the auspices of the Open Compute Project (http://www.opencompute.org/), is being introduced to bootstrap the install of switch operating systems. This opens up the possibility of leveraging Ironic to drive switch operating system deployment in a standard, vendor-agnostic way. We would like to seed this idea in this session and obtain feedback on the direction. This may also generate switch-specific requirements for Ironic.

(Session proposed by Sumit Naiksatam)

Modelling infrastructure in TripleO Tuskar:

A key goal of TripleO Tuskar is to provide infrastructure management for an OpenStack deployment. Part of this is modelling the OpenStack services and hardware. Right now there are two key models in Tuskar, Resource Classes and Racks. Introduced by the new wireframes[1] is the idea of L2 Groups, which are designed to replace Racks.

An L2 Group represents a grouping of nodes that are all present on the same Layer 2 network. This allows Tuskar to represent physical reality without being tied to any specific piece of hardware, such as a Rack.

We need to decide whether these two concepts are enough to model OpenStack infrastructure. If so, what information do they need to contain and what behaviour should they have? Are there other things we need to model? Service Types? Hardware Types? Failure Domains? Generic groupings?

In this session we will explore ideas around TripleO Tuskar infrastructure modelling, assessing the suitability of the current model and looking at ways to improve upon it.

The overall aim of the session is to gain a consensus for a sensible infrastructure model for version 1 of TripleO Tuskar.
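As a strawman for that discussion, a minimal sketch of the two concepts might look like the following. The field names are illustrative only, not Tuskar's actual schema: a ResourceClass maps a service type onto groups of nodes, and an L2Group collects the nodes that share a layer 2 network (replacing the Rack concept).

```python
# Strawman only - field names are illustrative, not Tuskar's actual schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    mac: str
    ipmi_address: str


@dataclass
class L2Group:
    name: str                      # e.g. "rack-12" or "leaf-switch-3"
    subnet: str                    # shared layer 2 / provisioning subnet
    nodes: List[Node] = field(default_factory=list)


@dataclass
class ResourceClass:
    name: str                      # e.g. "compute", "block-storage"
    service_type: str              # the OpenStack service(s) it runs
    groups: List[L2Group] = field(default_factory=list)

    def node_count(self) -> int:
        return sum(len(g.nodes) for g in self.groups)
```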

Discussion points:

Resource Classes
- Why are these valuable?
- Is a 1-many mapping from services to hardware sufficient for v1?
- Should we support combining services in a single resource class?

L2 Groups
- What value does an L2 Group bring to TripleO Tuskar?
- Is a model based on a layer 2 network a sensible approach?
- If so, what information do we need to store and how do we utilize it?

Services
- How do we manage different services across OpenStack infrastructure?
- Who is responsible for upgrades? Migration of services? Gracefully shutting down services?
- Is this part of a resource class, or do we need a separate model for this?
- Do we even need to support this in v1?
- Should we group OpenStack services together to create new ones?
  - To allow co-location of services for things like Compute and Block Storage performance?
  - Is this in the scope of v1?

Other Components
- Are there scenarios in which the above concepts are not sufficient?
- If so, are these edge cases?
- Do we want to support these scenarios in v1 of TripleO Tuskar?
- Should we directly model hardware: Racks, Chassis, Power Supplies and so on?
- Should we be more flexible in our modelling approach?
  - Do we want to offer a more generic approach altogether to modelling infrastructure?
  - How do we stop this becoming overly complex and difficult to use?
  - Do we shoot for simplicity (and be more opinionated)?


[1] http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf

(Session proposed by Martyn Taylor)


Tuesday November 5, 2013 3:40pm - 4:20pm
AWE Level 2, Room 203