Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.


Neutron
Tuesday, November 5
 

11:15am

Neutron Development Review and Icehouse Policies
This session will include the following subject(s):

Neutron Development Review and Icehouse Policies:

In this session, we will look back on the Havana development cycle and discuss new policy changes for the Icehouse cycle. All plugin and driver contributors are encouraged to have a representative at this session.

(Session proposed by Mark McClain)

CLI: Vendor Specific Command Extensions:

Several vendors have created extensions to support features unique to their plugin. In this session, we'll discuss how to organize the CLI commands to allow users to access these vendor-specific features.

(Session proposed by Mark McClain)


Tuesday November 5, 2013 11:15am - 11:55am
AWE Level 2, Room 201C

12:05pm

Achieving Parity with Nova-Networking
This session discusses how to close the gap with nova-networking so that it can be safely deprecated. Beyond feature gaps, it covers disparities in test coverage as well as performance, usability, and documentation issues.

(Session proposed by Brent Eagles)


Tuesday November 5, 2013 12:05pm - 12:45pm
AWE Level 2, Room 201C

2:00pm

Neutron VPN-as-a-Service
In this session, we are going to discuss work items for and extensions to VPNaaS in Icehouse, including adding support for MPLS, SSL, and possibly IPsec+L2TP. Additionally, vendor drivers will be covered.


(Session proposed by Nachi Ueno)


Tuesday November 5, 2013 2:00pm - 2:40pm
AWE Level 2, Room 201C

2:50pm

Neutron Firewall as a Service (FWaaS)
The FWaaS feature was introduced in the Havana release with basic functionality. There are a number of complementary and advanced features and work items that were either planned earlier or have evolved from experience with this initial version. We will discuss these topics during this session.


(Session proposed by Sumit Naiksatam)


Tuesday November 5, 2013 2:50pm - 3:30pm
AWE Level 2, Room 201C

3:40pm

Neutron API Framework Replacement
This session will include the following subject(s):

Neutron Plugin Interface:

The current Neutron plugin interface has accumulated lots of layers and mixins. This session will explore a proposed v3 API interface and the migration path for plugins currently supporting the v2 plugin API.

(Session proposed by Mark McClain)

Neutron API Framework Replacement:

This session will explore the steps necessary to replace Neutron's home-grown API framework with the Pecan WSGI framework. Pecan is already used by Ceilometer and Cinder.

(Session proposed by Mark McClain)
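As a rough illustration of the direction under discussion, here is a minimal Pecan application sketch; the controller layout and resource names are placeholders, not the proposed Neutron design:

# Minimal Pecan sketch; resource layout and names are illustrative only.
import pecan
from pecan import expose

class NetworksController(object):
    @expose('json')
    def index(self):
        # A real controller would delegate to the core plugin here.
        return {'networks': []}

class V2Controller(object):
    networks = NetworksController()

class RootController(object):
    v2 = V2Controller()

# Pecan builds a WSGI application from a controller tree.
app = pecan.make_app(RootController())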


Tuesday November 5, 2013 3:40pm - 4:20pm
AWE Level 2, Room 201C

4:40pm

Framework for Advanced Services in VMs
Discuss the requirements for a common framework to enable the deployment of advanced networking services that are implemented in virtual machines. See related etherpad here: https://etherpad.openstack.org/p/NeutronServiceVM


(Session proposed by Greg Regnier)


Tuesday November 5, 2013 4:40pm - 5:20pm
AWE Level 2, Room 201C

5:30pm

API Extensions for Drivers
This will be a split session to discuss extensions for drivers.

This session will include the following subject(s):

ML2: How to support driver-specific extensions:

This topic covers how to support driver-specific features. Various ML2 mechanism drivers have been implemented and proposed, and the features they provide may differ, so how to support such extensions in the ML2 plugin and its drivers is a good topic for discussion.

This is also important for migrating from a monolithic plugin to an ML2 driver. Existing monolithic core plugins differ in which extensions they support, and some have plugin-specific extensions; their maintainers will want to keep supporting those extensions after migrating to ML2.

If an extension introduces a new resource, a service plugin is one option, but do we really want a service plugin per extension? Generally speaking, that does not seem like a good idea to me. If an extension adds a new attribute to an existing resource, what can we do? Should we allow mechanism drivers to define an additional list of extensions?

The ML2 plugin has several merits (potential support for multiple backends, avoiding code duplication across core plugins, etc.), so I believe this is worth discussing.

I am currently reading the ML2 plugin code and looking for possible approaches.

(Session proposed by Akihiro Motoki)
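To make one of the options above concrete, here is a sketch of what letting a mechanism driver advertise extra extensions might look like; the extension_aliases attribute and its plugin-side handling are hypothetical, not an existing ML2 hook:

# Hypothetical sketch: 'extension_aliases' is not a real ML2 attribute;
# it only illustrates the idea of drivers contributing extensions.
from neutron.plugins.ml2 import driver_api as api

class VendorMechanismDriver(api.MechanismDriver):

    # Hypothetical: aliases this driver would add to the plugin's
    # supported_extension_aliases list.
    extension_aliases = ['vendor-feature']

    def initialize(self):
        pass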

Extensible API: deal with growing services:

During the Havana cycle, Neutron added three new services in addition to LBaaS: FWaaS, VPNaaS, and Metering. More services may follow.

All of these services are implemented as API extensions at this point, but as the number and diversity of features grows, this will create a need for 'extensions for extensions'.

We need to decide how and what to make part of the core API, and to propose a way to extend the core service API with vendor-specific features.

(Session proposed by Eugene Nikanorov)


Tuesday November 5, 2013 5:30pm - 6:10pm
AWE Level 2, Room 201C
 
Wednesday, November 6
 

11:15am

Neutron QA and Testing
During Havana we made substantial progress with Neutron in the gate; however, it is still not at the same level as other projects. This session will be a joint QA/Neutron session (held in the Neutron track) to address some of these issues, including:

Full Tempest gating - what stands between us and removing the limitation that Neutron can only run the smoke jobs.

Parallel testing support - what stands between us and running tests in parallel against Neutron environments.

Upgrade testing - what are the plans for getting Neutron into Grenade's configuration and upgrade testing.

(Session proposed by Sean Dague)


Wednesday November 6, 2013 11:15am - 11:55am
AWE Level 2, Room 201C

12:05pm

Neutron Loadbalancing service (LBaaS)
This session will include the following subject(s):

Icehouse – topics for LBaaS:

A summary of all relevant discussions around LBaaS is available at http://goo.gl/QvINBh.

In addition to the proposals, the following should be addressed at the "generic" API level:

L7 rules - https://blueprints.launchpad.net/neutron/+spec/lbaas-l7-rules

Also, using the extension proposal by Eugene, we would like to implement a "template"-based declarative mechanism - https://blueprints.launchpad.net/neutron/+spec/lbaas-template-engine

(Session proposed by Samuel Bercovici)

Neutron Loadbalancing service (LBaaS):

We will discuss the feature roadmap for Icehouse.
Key points:
- object model change (VIP and pool relationship), the loadbalancer instance
- extensible API/capabilities/vendor plugin drivers
- the concept of a noop driver / "backendless" configuration

We will also cover vendor driver contributions.
Corresponding etherpad: https://etherpad.openstack.org/icehouse-neutron-lbaas

(Session proposed by Eugene Nikanorov)
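As a strawman for the object-model discussion above, one hypothetical shape of a "loadbalancer instance" grouping VIPs and pools; the field names are illustrative, not an agreed schema:

# Hypothetical illustration only: in Havana a pool has at most one VIP;
# the proposal groups both under a single loadbalancer instance.
loadbalancer = {
    'id': 'lb-1',
    'vips': [{'id': 'vip-1', 'address': '192.0.2.10', 'protocol_port': 80}],
    'pools': [{'id': 'pool-1', 'members': ['10.0.0.3:8080', '10.0.0.4:8080']}],
    'provider': 'haproxy',
}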


Wednesday November 6, 2013 12:05pm - 12:45pm
AWE Level 2, Room 201C

2:00pm

Mini-Sessions: Gateway Port Forwarding and Switch Port Ext
This session will include the following subject(s):

Neutron Switch Port Extension:

For services hosted in virtual machines or in namespaces, their interface plugs into a Neutron port, which in most cases maps to a virtual switch port (such as an OVS port). Neutron is aware of this port and can therefore establish the required network connectivity for that interface and service. However, when a service realized on a physical appliance is connected to a physical switch port, it is often not possible to discover the location of that service in order to provide the required network connectivity. We will discuss an extension to the port resource that can help solve this issue.

(Session proposed by Sumit Naiksatam)

Add port forwarding from gateway to internal hosts:

This blueprint essentially proposes implementing DNAT on routers, enabling outside access to internal hosts via different ports on the gateway IP.

This feature is important for running OpenStack as IaaS: a public cloud user should be able to create a router and apply port forwarding and firewall rules to their internal network.

(Session proposed by Jianing YANG)
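For context, this is the kind of rule such a feature would program in the router's namespace, sketched here as an iptables DNAT rule driven from Python; the addresses and ports are made up, and the actual API and agent changes are what the session is about:

# Illustration of DNAT port forwarding, not the proposed implementation.
import subprocess

GATEWAY_IP = '203.0.113.10'   # hypothetical router gateway address
INTERNAL_IP = '10.0.0.5'      # hypothetical internal host

# Forward TCP port 2222 on the gateway IP to port 22 on the internal host.
subprocess.check_call([
    'iptables', '-t', 'nat', '-A', 'PREROUTING',
    '-d', GATEWAY_IP, '-p', 'tcp', '--dport', '2222',
    '-j', 'DNAT', '--to-destination', INTERNAL_IP + ':22',
])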


Wednesday November 6, 2013 2:00pm - 2:40pm
AWE Level 2, Room 201C

2:50pm

Neutron Pain Points
A survey of some of the usability and development challenges facing Neutron, intended to encourage consensus on how such issues might best be addressed.

Shortlist of suggested issues to discuss:

Usability

Application developers whose primary concern is logical connectivity may complain that the current set of Neutron primitives (Network, Subnet, Port) is confusing. Network engineers may complain that Neutron does an insufficient job of exposing networking primitives. How can Neutron provide better usability for both types of users?


Reliability

Neutron is becoming more feature-rich with each release, but quality is suffering. What strategies can we employ to reverse this trend?


Complexity

The complexity of Neutron has been increasing as more and more control-plane functionality is included. Should this trend be allowed to continue, or should the role of Neutron be reduced to providing glue between OpenStack and 3rd party networking options?

(Session proposed by Maru Newby)


Wednesday November 6, 2013 2:50pm - 3:30pm
AWE Level 2, Room 201C

3:40pm

L3 advanced features
The aim of this session is to discuss the following topics related to L3 features supported by Neutron plugins.

1) Dynamic Routing
2) Support for multiple external gateways

Dynamic Routing
------------------
In a nutshell: so far, Neutron L3 implementations leverage only static routes; dynamic routing is a potentially desirable new feature.

Scenarios:
A) dynamic routing between Neutron routers
B) dynamic routing between Neutron routers and external systems

The goals of the session would be:
1) Reach consensus on whether this is something that should be targeted for the Icehouse release, and which scenarios should be supported
2) Decide which routing protocols should be supported and in which scenarios (e.g. OSPF and RIPv2 for internal routing, BGP for external routing)
3) Identify the APIs Neutron should expose for configuring external routing
4) Provide a high level roadmap for support in the l3 agent and identify potential bottlenecks


Multiple External Gateways
--------------------------
In a nutshell: Extend the L3 API to allow for setting multiple gateways on a Neutron router

Typical scenario: http://en.wikipedia.org/wiki/Equal-cost_multi-path_routing

During the session the community should agree on whether this is something that makes sense for Neutron routers and, if so, discuss a roadmap for the API changes and for support in the l3 agent.

Blueprint links: TBA

(Session proposed by Salvatore Orlando)


Wednesday November 6, 2013 3:40pm - 4:20pm
AWE Level 2, Room 201C

4:40pm

Extensibility of Modular Layer 2
This session will include the following subject(s):

ML2: More extensible TypeDrivers:

The current ML2 TypeDrivers assume that ML2 manages segmentation types via the TypeDrivers. What we've noticed while developing MechanismDrivers for an array of SDN controllers is that they want to control the segmentation type and perhaps even the segmentation ID. The current ML2 TypeManager assumes an integer segmentation ID, but a controller MechanismDriver may want to allocate something other than an integer. Additionally, controller-based TypeDrivers may require interactions with the controller to allocate and deallocate segmentation IDs. A good discussion around this ML2 use case would be great to have.

(Session proposed by Kyle Mestery)
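A self-contained sketch of the use case described above; this is not the real ML2 TypeDriver base class, and the controller client and its methods are hypothetical:

# Sketch: a type driver that delegates segment allocation to an SDN
# controller, which may hand back an opaque (non-integer) identifier.
class ControllerTypeDriver(object):
    def __init__(self, controller_client):
        self.controller = controller_client  # hypothetical REST client

    def allocate_tenant_segment(self, session):
        # The controller returns an opaque ID (e.g. a UUID); the current
        # integer-based TypeManager cannot store such a value.
        segment_id = self.controller.allocate_segment()
        return {'network_type': 'controller',
                'segmentation_id': segment_id}

    def release_segment(self, session, segment):
        # De-allocation also requires a round trip to the controller.
        self.controller.release_segment(segment['segmentation_id'])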

ML2: RPC handling in ML2:

Currently, RPC handling is done at a higher layer in the ML2 plugin. This works well for the existing RPC calls supplied and consumed by the OVS, Linux Bridge, and Hyper-V agents, but we've hit some problems when additional RPC calls are needed by a specific MechanismDriver; there is currently no clean way to do this. Additionally, it's conceivable that a controller-based SDN MechanismDriver may want to listen for RPC calls, which the current infrastructure doesn't allow.

(Session proposed by Kyle Mestery)


Wednesday November 6, 2013 4:40pm - 5:20pm
AWE Level 2, Room 201C

5:30pm

Modular Layer 2 QoS and Deprecated Plugin Migration
This session will include the following subject(s):

Quality of Service API extension:

Updates on the current status of a vendor-neutral Quality of Service API extension in Neutron.

(Session proposed by Sean M. Collins)

ML2: Migration from deprecated plugins:

The Open vSwitch and LinuxBridge plugins are planned for deprecation in the Icehouse timeframe. This session will discuss what we as a team would like to do with regard to migrating people from these plugins to the ML2 plugin. Ideally, we would automate this as much as possible.


(Session proposed by Kyle Mestery)


Wednesday November 6, 2013 5:30pm - 6:10pm
AWE Level 2, Room 201C
 
Friday, November 8
 

9:00am

Resource Management
In this split session we'll be discussing scheduling and resource management.

This session will include the following subject(s):

Service VMs & HW devices, scheduling and agents:

This session is proposed to discuss how virtual and physical appliances can be better supported in Neutron. The focus is on scheduling, network plugging, and the evolution of agents.

These topics could all be covered in discussions of a framework for service VMs, so this proposal could be merged with other session proposals to produce a workable session schedule.

(Session proposed by Bob Melander)

Dynamic Network Resource Management for Neutron:

DNRM is a framework that adds support for virtual network appliances, multivendor network resources, and policy-based scheduling of physical and virtual resource instances. It overlaps with several other initiatives, including proposals for service instances. We will be demonstrating a prototype implementation during the Summit. The goal of this session is to turn this blueprint into a work item for Icehouse.

(Session proposed by Geoff Arnold)


Friday November 8, 2013 9:00am - 9:40am
AWE Level 2, Room 201C

9:50am

Neutron based Distributed Virtual Router
The aim of this session is to discuss implementing a Distributed Virtual Router in Neutron.

Distributed Virtual Routing
---------------------------
Today Neutron implements L3 routing functionality on the network node, so packets for any intra-tenant routing have to flow through the network node; this creates a single point of failure and also introduces performance issues.

A Distributed Virtual Router would provide the flexibility to add routers within the compute nodes and to route packets locally instead of passing them through the network node.

The goals of this session would be:
1. Get consensus from the community on whether the proposed model is the right one for the Distributed Virtual Router.
2. Provide a roadmap for supporting this feature.
3. Identify any dependencies.
4. Agree on next steps (define and refine any extra APIs required; decide whether plugins should use the existing L3 plugin or write an extension).


Blueprint links: TBA



(Session proposed by Swaminathan Vasudevan)


Friday November 8, 2013 9:50am - 10:30am
AWE Level 2, Room 201C

11:00am

Neutron Service Chaining and Insertion
We have three "advanced" services in Neutron today: LBaaS, FWaaS, and VPNaaS. However, there is no API available for the user to express which traffic these services should apply to. For instance, a bump-in-the-wire firewall, a tap service, or an L2 VPN would all require a subnet context. When provided, this context can be used by the provider to appropriately configure the data path; this is commonly referred to as service insertion.

Moreover, with more than one service, it becomes relevant to explore how multiple services can be sequenced in a chain. An example, in the context of today's reference implementations, is the insertion and chaining of firewall and VPN services. Each of these reference implementations relies on iptables chains to program the relevant filters and policies for its purpose. However, in the absence of a chaining abstraction to express the sequence of these services, the implementations act independently, and the resulting order of operations is incidental and cannot be controlled.

In this session we will discuss how the above two issues can be solved by enhancing existing abstractions and adding new abstractions in Neutron. The objective is to support both modes of instantiating services - independently and as part of a chain - with support for the former in a non-disruptive fashion (since this is the default mode today).

There was discussion on this topic during the last summit. The proposed session will advance this discussion based on the feedback gained during the past six months (and build on what was added to Neutron in the H release). We will focus on transitioning to the pragmatics of what can be implemented and achieved in the Icehouse timeframe.

Etherpad: https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining

(Session proposed by Sumit Naiksatam)


Friday November 8, 2013 11:00am - 11:40am
AWE Level 2, Room 201C

11:50am

Layer 2 Topics
This session will include the following subject(s):

L2 VPNs as a service:

This is a discussion of L2 VPNs - such as GRE tunnels, L2TP, and even VLANs - and how we might implement an API for them in Neutron.

L2 VPNs differ from L3 VPNs in that there is no address on the inside end of the connection - the end that transmits content between the tunnel and the Neutron network. In some encapsulation cases, such as VLANs, there isn't even an IP address on the outside of the tunnel.

Some API models we already use have parallels to L2 VPNs - both routers and provider networks bear some similarity - but neither is a perfect match. Routers typically provide an addressed internal port, and provider networks are statically created by configuration, since they reflect the hardware setup of the system.

This session will review a few possible models for how we might describe L2 VPNs with objects and REST APIs, followed by a general discussion.

(Session proposed by Ian Wells)
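One possible strawman for such a REST resource, to anchor the discussion; all field names here are hypothetical, not a proposed API:

# Hypothetical L2 VPN connection resource; illustrative only.
l2vpn_connection = {
    'id': 'l2vpn-1',
    'network_id': 'net-1',            # Neutron network being extended
    'encapsulation': 'gre',           # or 'l2tp', 'vlan'
    'peer_address': '198.51.100.7',   # may be absent for VLAN encaps
    'segmentation_id': 4000,
}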

Gateway extension API proposal:

Abstract:

Neutron already provides an abstract Router API extension for routing between cloud tenants' virtual networks. Its main purpose is to enable NATing of the IP addresses of an unlimited number of VMs to a limited pool of external/public IPv4 addresses. However, routing between virtual network subnets adds complexity (at least for the simple tenant-facing abstract API): in automating the mandatory per-subnet IP address design of virtual networks belonging to the same tenant, in sharing various L2 services (usually by configuring helper services in routers), and in moving VMs with zero downtime (usually requiring extra tunneling when not in the same L2 domain).

We propose to add optional bridging operations to the Router object, abstracting both router and bridge in a Gateway object managed by cloud tenant admins. This will provide a simple REST interface for bridging virtual networks together, and with physical networks, while the underlying plugin focuses on programmatically controlling the L2 broadcast domain across heterogeneous virtual networking technologies. This broadcast domain is usually emulated using L2-over-L3 tunnel overlays between virtual switches when native tagging is not available, but other schemes could also be used. With this API, we will be able to easily stitch Neutron networks to various existing services in enterprise data centers that are not managed by OpenStack: enterprise DHCP servers, PXE boot software provisioning servers, and L2VPN gateways to elastic WANs, to cite only a few.



(Session proposed by Racha Ben Ali)


Friday November 8, 2013 11:50am - 12:30pm
AWE Level 2, Room 201C

1:30pm

ML2 SDN Mechanism Drivers and Agents
This session will include the following subject(s):

Neutron+SDN: ML2 MechanismDriver or Plugin?:

The Modular Layer 2 (ML2) core plugin introduced in Havana replaces and deprecates the monolithic Open vSwitch and Linux Bridge core plugins. It includes MechanismDrivers supporting both of these plugins' L2 agents as well as the Hyper-V L2 agent. Not only is redundant code eliminated, but these L2 agents can now be combined in heterogeneous environments. Additional MechanismDrivers integrate with various types of switching gear. The ML2 MechanismDriver API is intended to support integration with any type of virtual networking mechanism, and work is under way to integrate at least one SDN controller via ML2.

This session will explore the two approaches for integrating SDN controllers with Neutron - as monolithic core plugins and as ML2 MechanismDrivers. Topics to discuss include any current technical obstacles to integrating SDN controllers via ML2, the advantages of using ML2, and whether the heterogeneity provided by ML2 is useful with SDN. Hopefully the session will lead to consensus on which approach makes more sense for future SDN integrations, and on whether current core plugins supporting SDN controllers should eventually become ML2 MechanismDrivers. The session will be most valuable if maintainers of existing monolithic plugins and ML2 MechanismDrivers, as well as those considering new ones, can participate.

(Session proposed by Robert Kukura)

Modular [L2] Agent:

We now have a Modular Layer 2 (ML2) core plugin that supports a variety of networking mechanisms. Many of those involve L2 agents. These agents typically have similar structure and code. Can they be replaced by a single modular L2 agent? If so, can/should that agent support other functionality beyond L2 (L3, DHCP, *aaS, ...)?


(Session proposed by Robert Kukura)


Friday November 8, 2013 1:30pm - 2:10pm
AWE Level 2, Room 201C

2:20pm

Neutron L3 Agent improvements
This session will explore how we could improve the Neutron L3 agent by bringing high availability and stateful connection handling to external network connections.

The discussion will be about API design, scheduling and which backends to use (keepalived, ipvs, etc).

(Session proposed by Emilien Macchi)


Friday November 8, 2013 2:20pm - 3:00pm
AWE Level 2, Room 201C

3:10pm

Connectivity Group Extension API
This session will discuss making the Neutron APIs more application-friendly. The Connectivity Group abstraction blueprint is one possible means of achieving this, but we'd like to discuss other ideas around this as well.

(Session proposed by Kyle Mestery)


Friday November 8, 2013 3:10pm - 3:50pm
AWE Level 2, Room 201C

4:10pm

Network Mini Sessions
This session will include the following subject(s):

ML2: Multiple Backend Support:

The ML2 plugin can potentially support multiple backend technologies, but in Havana only a single backend can be used at a time.

Considering use cases in cloud data centers or NFV, there are cases where one VM wants to connect to multiple networks with different network backends. For example, one interface is connected to a Linux bridge with VLAN for a frontend network, while another interface is connected to OVS controlled by an SDN controller for an internal network. (We already see a similar case with br-int and br-ex on the l3-agent.)

To achieve this, the following topics need to be considered:
- How to specify backend technologies when creating a network (through an extension)
- Mechanism driver support
- How the Nova VIF driver determines the bridge connected to a VIF based on information from Neutron (by extending the port binding extension)

This is different from provider:network_type: network_type specifies which protocol (VLAN, GRE, VXLAN, ...) is used for a tenant network, and it operates within a single backend technology such as Linux bridge or OVS.

This may not be specific to ML2, but I think ML2 is a good starting point for this topic.

(Session proposed by Akihiro Motoki)

Improving the Provider API:

So far in Neutron, there has been more focus on the tenant-facing API than on the provider-facing API. For Icehouse, we believe the Neutron API should evolve to provide richer capabilities for providers. In this session, we would like to discuss the following topics:

Provider Router:

Neutron should let the provider and tenants own their own routers, and they should be able to link these routers together. This enables the model of tenants linking their routers to a provider-owned uplink router, giving the provider more control over the internet and inter-tenant traffic that passes through its edge router.



(Session proposed by Ryu Ishimoto)


Friday November 8, 2013 4:10pm - 4:50pm
AWE Level 2, Room 201C

5:00pm

Neutron Stability
In this session we will examine how to address the race conditions that caused problems during the Havana development cycle.

(Session proposed by Mark McClain)


Friday November 8, 2013 5:00pm - 5:40pm
AWE Level 2, Room 201C