Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.

Ceilometer
Tuesday, November 5

4:40pm HKT

Improvement of the central agent
During recent weekly IRC meetings, we have discussed merging the Ceilometer hardware agent into the central agent. However, there are still some issues in the current central agent we need to address:

- How to horizontally scale out the central agent?
Currently, only one central agent is allowed in a Ceilometer deployment; otherwise the metering/monitoring samples would be heavily duplicated. We need to figure out how to deploy several central agents without such duplication.

We could follow the current alarm-partitioning approach, i.e. one dynamically elected master central agent distributes the resources among all the central agents for them to poll.

Note that unlike the alarm service, which has only one type of resource (alarms), the central agent will have different types of resources for different pollsters, e.g. glance resources, swift resources, hardware resources, etc. The distribution process must take that into account and avoid assigning resources to a central agent that is not configured with the relevant pollsters for that type.
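A minimal sketch of such pollster-aware distribution (agent names and structure are hypothetical, not Ceilometer code): the master hashes each resource onto only those agents configured with the matching pollster group.

```python
from hashlib import md5

# Hypothetical sketch: each agent advertises which pollster groups
# (e.g. 'glance', 'swift', 'hardware') it is configured to run.
agents = {
    "agent-1": {"glance", "swift"},
    "agent-2": {"swift", "hardware"},
    "agent-3": {"hardware"},
}

def assign(resource_id, pollster_group):
    """Pick an agent for a resource, considering only agents that
    actually run the pollsters for this resource type."""
    eligible = sorted(a for a, groups in agents.items()
                      if pollster_group in groups)
    if not eligible:
        return None  # no agent can poll this resource type
    # A stable hash keeps assignments consistent across re-evaluations.
    idx = int(md5(resource_id.encode()).hexdigest(), 16) % len(eligible)
    return eligible[idx]

print(assign("ipmi-host-42", "hardware"))  # one of agent-2 / agent-3
```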

- Do we want to allow the admin to manually configure which resources are polled in the pipeline configuration file?
Besides getting all the available resources from a third-party API endpoint, e.g. the glance/swift/neutron API, I think it would be better to also allow the admin to manually configure the resources in the pipeline file, by adding a new 'resources' item, just as the hardware agent does now. This gives the admin much more flexibility.
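A hypothetical pipeline entry with such a 'resources' item might look like the following (the keys and URL scheme are illustrative, modeled loosely on the hardware agent's configuration, not a settled schema):

```yaml
# Illustrative only -- a 'resources' list alongside the usual pipeline keys
- name: hardware_pipeline
  interval: 600
  counters:
    - hardware.cpu.load.1min
  resources:
    - snmp://admin@10.0.0.5
    - snmp://admin@10.0.0.6
  publishers:
    - rpc://
```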

(Session proposed by Lianhao Lu)

Tuesday November 5, 2013 4:40pm - 5:20pm HKT
AWE Level 2, Room 203

5:30pm HKT

Expose hardware sensor (IPMI) data
This session will include the following subject(s):

Expose hardware sensor (IPMI) data:

Ceilometer would like to monitor hardware that cannot have a local agent (e.g., nova-baremetal and ironic tenants).

Since Ironic already requires the IPMI credentials for the hardware it manages, it seems logical for Ironic to expose a mechanism for polling hardware sensor data via IPMI. This API would then be consumed by Ceilometer (and possibly by other services as well).

Let's get the Ceilometer and Ironic teams together to talk about this!

(Session proposed by Devananda)

Canary monitoring integration:

We wrote and maintain Canary (https://github.com/gridcentric/canary), a tool for monitoring OpenStack physical infrastructure. Given that Ceilometer is now taking on some of those responsibilities, we'd like to contribute code and ideas implemented in Canary into Ceilometer in order to have a single, powerful project for OpenStack monitoring.

This session could serve as both an opportunity to discuss different approaches to physical host monitoring (including specific design decisions that we made that were different from Ceilometer) and flesh out a path for implementing and integrating any relevant ideas from Canary.

(Session proposed by Adin Scannell)

Tuesday November 5, 2013 5:30pm - 6:10pm HKT
AWE Level 2, Room 203
Wednesday, November 6

11:15am HKT

Big Data processing
This session will include the following subject(s):

Big Data processing:

Large deployments may produce a huge volume of stats. For an illustration, see http://docs.openstack.org/developer/ceilometer/install/dbreco.html: with 20,000 instances probed once a minute, we will collect about 1TB per month. Moreover, some meters are better probed more frequently, which further increases the amount of stored data. At this scale we need Big Data technologies, and Apache Hadoop is one of them. It can work with all of Ceilometer's backends: with SQL and DB2 using Apache Sqoop, with Mongo using the Hadoop-Mongo connector, and with HBase. At this session I will share my team's experience solving metering problems using HBase and Hadoop.
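The back-of-the-envelope arithmetic behind that estimate can be checked; the per-sample storage size below (~1.2 KB) is my own assumption, chosen to match the 1TB/month figure:

```python
instances = 20_000
samples_per_instance_per_month = 60 * 24 * 30   # one probe per minute
samples = instances * samples_per_instance_per_month
print(samples)                                  # 864,000,000 samples per month

bytes_per_sample = 1_200                        # assumed average stored size
print(samples * bytes_per_sample / 1e12)        # ~1.04 TB per month
```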

(Session proposed by Nadya Privalova)

Scaling Ceilometer:

Certain meters need to provide data on the health and status of workloads, storage, network, etc. closer to real time, so that automated (or manual) notifications can be generated against policies to fix "problems" as they occur. Currently, Ceilometer defaults to collecting meters at 10-minute intervals, but it is foreseeable that some meters need to be captured at a much finer granularity (down to milliseconds). Additionally, Ceilometer continues to support/record more and more data.

This design session is to discuss how Ceilometer scales to handle an increasing amount of data captured from an increasing number of items it meters/monitors/alarms on. How do we design/implement Ceilometer to handle various stress levels on both the collector and the backend (and what are the limits Ceilometer is expected to handle)?

Ceilometer has the ability to deploy multiple collectors horizontally... is that enough, and is there a better way? Also, what are the limits to what Ceilometer should collect (e.g. collecting logging information would increase the data load dramatically)?

(Session proposed by gordon chung)

Wednesday November 6, 2013 11:15am - 11:55am HKT
AWE Level 2, Room 203

12:05pm HKT

PaaS Event Usage Collection
This session will include the following subject(s):

PaaS Event Usage Collection:

Ceilometer must be able to efficiently extract and process usage information from services in order to properly generate bills. To process usage data efficiently, Ceilometer requires services to implement a consistent interface. This document describes the metering requirements and use cases, and provides an architecture for collecting usage information from Platform as a Service (PaaS) offerings.

(Session proposed by Phil Neal)

PaaS Event Format for Ceilometer Collection:

Up until recently, the focus of OpenStack has been on infrastructure-level services (compute, storage, networking), but there are a large number of PaaS services currently in development and a growing number of applications running on OpenStack systems. Many of these new applications will need to be metered for a variety of reasons: internal systems management, license management, end-customer billing.

Ceilometer currently supports collection of metering information for infrastructure-level services. The general mechanism has been to either create a targeted agent or use the existing service APIs to meter these systems. While this approach has worked for the existing set of services, it will run into several problems as the number of PaaS and SaaS services that need to be metered expands. The main issue is that it will not be viable to do custom integrations for the potentially hundreds (and eventually thousands) of services that may be hosted in OpenStack. This blueprint proposes standardizing the format of integrations so that Ceilometer can consume metering data from any conforming application without custom integration or even a code change.

(Session proposed by Phil Neal)

Wednesday November 6, 2013 12:05pm - 12:45pm HKT
AWE Level 2, Room 203

2:00pm HKT

Extended stats filtering
This session will include the following subject(s):

Extended stats filtering:

Today a user can get four statistics for a given metric: max, min, avg and sum. Generally speaking, there are many more potentially interesting aggregates that could be calculated for a given metric. As a first step, I would like to propose adding filtering capabilities to the existing statistics. This feature would give the user the ability to provide criteria that select the events which influence the counters. Here are a few example use cases:
1. Ceilometer may provide statistics about instances booted from a specific image
2. Statistics for instances colocated in a subnet

It would also be good if the user were able to dynamically create such filters and combine several metrics together.
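As a sketch of what such a filtered statistics request might carry (the field names and shape are purely illustrative, not the current v2 API):

```python
# Hypothetical query body: statistics for cpu_util restricted to
# instances booted from one particular image.
query = {
    "meter": "cpu_util",
    "filter": [
        {"field": "metadata.image_ref",
         "op": "eq",
         "value": "IMAGE_UUID"},   # placeholder, not a real image id
    ],
    "aggregates": ["avg", "max"],
}
```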

(Session proposed by Nadya Privalova)

Improving Ceilometer API query filtering:

The current version of the Ceilometer API supports a simple query filtering option with a logical AND relation between the fields. This results in multiple API calls when a list of elements (samples or statistics) is requested. The API can be improved to support lists in the fields of a query filter.

This improvement makes it possible to retrieve the samples of one or more meters for one or more resources in one API call, and also to get the statistics of multiple resources with a single API request.
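For illustration (the list-value query syntax here is hypothetical, not a settled design), two of today's calls could collapse into one:

```python
# Today: one API round-trip per resource (simple AND-only filter).
singles = [
    "/v2/meters/cpu_util?q.field=resource_id&q.op=eq&q.value=r1",
    "/v2/meters/cpu_util?q.field=resource_id&q.op=eq&q.value=r2",
]

# With list support (illustrative 'in' operator): one round-trip for both.
combined = "/v2/meters/cpu_util?q.field=resource_id&q.op=in&q.value=r1,r2"
```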

(Session proposed by Ildiko Vancsa)

Wednesday November 6, 2013 2:00pm - 2:40pm HKT
AWE Level 2, Room 203

2:50pm HKT

A fully-realized query API
This session will include the following subject(s):

A fully-realized query API:

Ceilometer currently supports a limited amount of functionality when querying via the API. We can filter timestamps based on a range and we can search other attributes based on equivalence. This will not be sufficient as the metadata contains more and more valuable information. Also, with the addition of Events, we will need to expand the API support to enable filtering there as well.

How do we create an interface that allows unrestricted filtering of data, and can it be backend-agnostic?

(Session proposed by gordon chung)

API improvements:

A grab-bag of topics related to potential improvements in the API:

* support a wider range of statistical aggregates (e.g. standard deviation, rate of change, moving window averages)

* selectable aggregation functions on statistics query

* complete pagination support started in Havana

* can we continue to evolve the v2 API, or do we need to start considering rev'ing it again to v3?

This session is intended as a placeholder for all API-related discussion, so please feel free to add further ideas.
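As a concrete illustration of one of the proposed aggregates, a moving window average is easy to sketch (example code only, not Ceilometer's implementation):

```python
def moving_average(values, window):
    """Simple moving average: the mean of each consecutive window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be in 1..len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average([1, 2, 3, 4, 5], 3))  # [2.0, 3.0, 4.0]
```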

(Session proposed by Eoghan Glynn)

Wednesday November 6, 2013 2:50pm - 3:30pm HKT
AWE Level 2, Room 203

3:40pm HKT

Roll-up of sample data
Currently ceilometer supports only the two extremes of data retention:

* keep everything for always (the default behavior)
* discard data older than a configured age (via the database.time_to_live option)

However, there are many use cases where fine-grained data doesn't need to persist forever, but we also don't want older data to suddenly fall off a cliff when it reaches a certain age.

In these cases, a roll-up scheme would be more convenient, in which data are distilled into progressively coarser-grained aggregates as they age, before eventually being expired completely. The data would thus become gradually more condensed without losing all of their value for trend analysis, before finally being aged out altogether.

This feature would be driven by configured policies, so that, for example, certain meters are retained forever, whereas others are aggressively rolled up, with data older than say one week being aggregated hourly, data older than a month being aggregated daily, and so on.
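A minimal sketch of such an age-based policy (thresholds and names purely illustrative):

```python
from datetime import timedelta

# Illustrative retention policy: (older-than threshold, roll-up granularity),
# checked from the coarsest tier down.
POLICY = [
    (timedelta(days=30), timedelta(days=1)),   # older than a month: daily
    (timedelta(days=7),  timedelta(hours=1)),  # older than a week: hourly
]

def granularity(age):
    """Return the roll-up period for a sample of the given age,
    or None if the sample is still kept at full resolution."""
    for older_than, period in POLICY:
        if age > older_than:
            return period
    return None

print(granularity(timedelta(days=3)))    # None (full resolution)
print(granularity(timedelta(days=10)))   # hourly roll-up
print(granularity(timedelta(days=45)))   # daily roll-up
```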

This session will explore the feasibility of implementing such a scheme across the various storage drivers.

(Session proposed by Eoghan Glynn)

Wednesday November 6, 2013 3:40pm - 4:20pm HKT
AWE Level 2, Room 203

4:40pm HKT

Future of alarming
For Havana, alarm evaluation is based on a service at arm's length from the metering pipeline, which retrospectively polls the statistics API over the configured evaluation window with the query constraints for each alarm rule.

There has been some discussion about moving the alarm evaluation logic into the sample/event ingestion pipeline.

The purpose of this session is to gather together the stakeholders in the alarming feature and thrash through the potential benefits and pitfalls of such an approach.

(Session proposed by Eoghan Glynn)

Wednesday November 6, 2013 4:40pm - 5:20pm HKT
AWE Level 2, Room 203

5:30pm HKT

Tighten model
Ceilometer does not enforce any kind of relationship on the model it provides in the database drivers, but it does so in the API, for example.

Let's clean that up.

(Session proposed by Julien Danjou)

Wednesday November 6, 2013 5:30pm - 6:10pm HKT
AWE Level 2, Room 203
Thursday, November 7

9:00am HKT

Feature parity in storage drivers
Our storage drivers have drifted apart as new features have been added. We should settle on one preferred driver that will always be maintained as feature complete, and then allow other drivers to lag as needed based on implementation. We also need a good way to document exactly which features are and are not supported, from the perspective of an API user.

(Session proposed by Doug Hellmann)

Thursday November 7, 2013 9:00am - 9:40am HKT
AWE Level 2, Room 203

9:50am HKT

Ceilometer integration testing
I'd like to propose a discussion about strategies for Ceilometer's integration testing.

Here is a short list of topics:
1. Tempest integration
2. Devstack + Ceilometer testing

We're working to define a more detailed agenda and appreciate any suggestions. I just wanted to remind the PTL about this topic :)

(Session proposed by Nadya Privalova)

Thursday November 7, 2013 9:50am - 10:30am HKT
AWE Level 2, Room 203