Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.
Tuesday, November 5 • 3:40pm - 4:20pm
Futures - network/discovery/topology modelling


This session will include the following subject(s):

TripleO, Tuskar: autodiscovery of hardware:

tl;dr: Tuskar assumes autodiscovery of physical nodes. Is this trivial,
already done, hard, or impossible? What can we do, and how can we help?

Even the earliest versions of the Tuskar UI wireframes/mock-ups assumed
that '(at some point) we'll get a list of available physical hardware
here', even if that means just a list of IP or MAC addresses. The user
story is that after an operator has plugged a new machine into a
top-of-rack switch (within a Tuskar management network), the existence
as well as the attributes of that machine are made available to the Tuskar
UI. Operators will not only be able to retrieve hardware information,
but also set constraints and conditions based on the hardware attributes
[1]. All of this implies autodiscovery and enrollment of hardware.

So, what does autodiscovery look like? Is it brute force:

1) Talk directly to the switch and get a MAC address. This at least
tells you that a) the machine's MAC has appeared in the switch's MAC
table, and b) you can use that to catch the IP assignment (via ARP,
for example). So you get MAC and IP.

2) ???

3) Profit
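Step 1b above ("catch the IP assignment via ARP") boils down to pulling the sender MAC and IP out of ARP frames seen on the management network. A minimal stdlib-only sketch of that parsing step is below; in a real discovery agent the frames would come from an AF_PACKET socket or a sniffing library, which is out of scope here.

```python
import socket
import struct

def parse_arp_reply(frame: bytes):
    """Extract (sender MAC, sender IP) from a raw Ethernet ARP frame.

    Returns None for non-ARP frames. Field offsets follow RFC 826
    (ARP over Ethernet: 14-byte Ethernet header + 28-byte ARP payload).
    """
    if len(frame) < 42:
        return None
    # Ethernet header: dst MAC (6), src MAC (6), EtherType (2)
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0806:  # not ARP
        return None
    # ARP payload at offset 14: htype(2) ptype(2) hlen(1) plen(1) op(2),
    # then sender MAC (6), sender IP (4), target MAC (6), target IP (4)
    sender_mac = frame[22:28]
    sender_ip = frame[28:32]
    mac = ":".join(f"{b:02x}" for b in sender_mac)
    return mac, socket.inet_ntoa(sender_ip)

# Demo with a hand-crafted ARP reply frame:
demo = (b"\xff" * 6                              # dst: broadcast
        + bytes.fromhex("aabbccddeeff")          # src MAC
        + b"\x08\x06"                            # EtherType: ARP
        + struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
        + bytes.fromhex("aabbccddeeff")          # sender MAC
        + socket.inet_aton("10.0.0.5")           # sender IP
        + b"\x00" * 6                            # target MAC
        + socket.inet_aton("10.0.0.1"))          # target IP
print(parse_arp_reply(demo))  # ('aa:bb:cc:dd:ee:ff', '10.0.0.5')
```

This only yields the MAC/IP pair; everything beyond that (step 2) is exactly what the discussion points below are about.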


Discussion points:

* How do we find out about the other attributes we're interested in:
CPU model, number of physical cores, memory in GB, and so on?

* Is this really obvious, in the sense that there are already tools that
do this for us? Can IPMI do this for us?

* There is an existing Ironic blueprint outlining a solution [2][3], are
there any obvious blockers there?

* Is this pretty much sorted in Ironic/other_tool, in which case we
should just make sure Tuskar can talk to that without issue?

* What can Tuskar devs do to help Ironic/other_tool move this along?
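On the IPMI question: a BMC's FRU inventory can be read out-of-band with ipmitool, which at least yields board/chassis identity; attributes like core counts and memory typically need an in-band agent such as the discovery ramdisk in [3]. A hedged sketch (the wrapper function and its parameters are illustrative; transport options vary per BMC):

```python
import subprocess

def parse_fru(text):
    """Parse ipmitool's 'Key : Value' FRU output into a dict."""
    attrs = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            attrs[key.strip()] = value.strip()
    return attrs

def fetch_fru(host, user, password):
    """Query FRU inventory over IPMI-over-LAN using ipmitool.

    Hypothetical wrapper: requires ipmitool installed and a reachable
    BMC; interface/credential handling will differ per deployment.
    """
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, "fru", "print"],
        capture_output=True, text=True, check=True).stdout
    return parse_fru(out)

# Parsing example against sample ipmitool-style output:
sample = """FRU Device Description : Builtin FRU Device (ID 0)
 Board Mfg             : Example Corp
 Product Name          : Model X1
"""
print(parse_fru(sample)["Product Name"])  # Model X1
```

The gap between what FRU data provides and what Tuskar's wireframes need (cores, memory) is one concrete way to frame the "can IPMI do this?" question.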

[1]
http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf
[2] https://blueprints.launchpad.net/ironic/+spec/advanced-driver-api
[3] https://blueprints.launchpad.net/ironic/+spec/discovery-ramdisk


thanks! marios

(Session proposed by Marios Andreou)

TripleO for network infrastructure deployment:

Currently TripleO manages the deployment of bare-metal compute infrastructure for an OpenStack "overcloud". However, this does not address the deployment of the networking infrastructure (e.g. leaf and spine switches) that provides connectivity for the overcloud. This is difficult to achieve with traditional switches, since they are deployed in a non-standard fashion that often varies by vendor. However, the Open Network Install Environment (ONIE, http://onie.github.io/onie/) initiative, currently managed under the auspices of the Open Compute Project (http://www.opencompute.org/), is being introduced to bootstrap the install of switch operating systems. This opens up the possibility of leveraging Ironic for switch operating system deployment in a standard, vendor-agnostic way. We would like to seed this idea in this session and obtain feedback on the direction. This may also generate switch-specific requirements for Ironic.

(Session proposed by Sumit Naiksatam)

Modelling infrastructure in TripleO Tuskar:

A key goal of TripleO Tuskar is to provide infrastructure management for an OpenStack deployment. Part of this is modelling the OpenStack services and hardware. Right now there are two key models in Tuskar, Resource Classes and Racks. The new wireframes [1] introduce the idea of L2 Groups, which are designed to replace Racks.

An L2 Group represents a grouping of nodes that are all present on the same Layer 2 network. This allows Tuskar to represent physical reality without being tied to any specific hardware arrangement, such as a Rack.
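One way to picture the Resource Class / L2 Group scheme is as a small data model. This is purely illustrative: the class names and fields below are assumptions for discussion, not Tuskar's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A discovered physical machine (hypothetical minimal fields)."""
    mac: str                 # learned via the management network
    ip: str = ""
    cpu_cores: int = 0
    memory_mb: int = 0

@dataclass
class L2Group:
    """Nodes sharing one Layer 2 network; the proposed Rack replacement."""
    name: str
    subnet: str              # e.g. "192.168.10.0/24"
    nodes: List[Node] = field(default_factory=list)

@dataclass
class ResourceClass:
    """Maps an OpenStack service type onto groups of hardware."""
    name: str
    service_type: str        # e.g. "compute", "block-storage"
    groups: List[L2Group] = field(default_factory=list)

    def node_count(self) -> int:
        return sum(len(g.nodes) for g in self.groups)

group = L2Group("group-a", "192.168.10.0/24",
                [Node("aa:bb:cc:dd:ee:ff", "192.168.10.5")])
compute = ResourceClass("compute-1", "compute", [group])
print(compute.node_count())  # 1
```

Even this toy version surfaces the questions below: a Node belongs to exactly one L2Group, a ResourceClass maps one service type to many groups, and nothing here models services, failure domains, or hardware like chassis and power supplies.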

We need to decide whether these two concepts are enough to model OpenStack infrastructure. If so, what information do they need to contain and what behaviour should they have? Are there other things we need to model? Service Types? Hardware Types? Failure Domains? Generic groupings?

In this session we will explore ideas around TripleO Tuskar infrastructure modelling, assessing the suitability of the current model and looking at ways to improve upon it.

The overall aim of the session is to reach consensus on a sensible infrastructure model for version 1 of TripleO Tuskar.

Discussion points:

Resource Classes
* Why are these valuable?
* Is a 1-to-many mapping from services to hardware sufficient for v1?
* Should we support combined services in single resource classes?

L2 Groups
* What value does an L2 Group bring to TripleO Tuskar?
* Is a model based on a Layer 2 network a sensible approach?
* If so, what information do we need to store and how do we utilize it?

Services
* How do we manage different services across OpenStack infrastructure?
* Who is responsible for upgrades? Migrating services? Gracefully shutting down services?
* Is this part of a resource class, or do we need a separate model for this?
* Do we even need to support this in v1?
* Should we group OpenStack services together to create new ones, e.g. to allow co-location of services for things like Compute and Block Storage performance? Is this in scope for v1?

Other Components
* Are there scenarios in which the above concepts are not sufficient? If so, are these edge cases? Do we want to support these scenarios in v1 of TripleO Tuskar?
* Should we directly model hardware: Racks, Chassis, Power Supplies and so on?
* Should we be more flexible in our modelling approach? Do we want to offer a more generic approach altogether to modelling infrastructure?
* How do we stop this becoming overly complex and difficult to use? Do we shoot for simplicity (and be more opinionated)?


[1] http://people.redhat.com/~jcoufal/openstack/tuskar/2013-10-16_tuskar_resource_class_creation_wireframes.pdf

(Session proposed by Martyn Taylor)


Tuesday November 5, 2013 3:40pm - 4:20pm
AWE Level 2, Room 203
