Please note: This schedule is for OpenStack Active Technical Contributors participating in the Icehouse Design Summit sessions in Hong Kong. These are working sessions to determine the roadmap of the Icehouse release and make decisions across the project. To see the full OpenStack Summit schedule, including presentations, panels and workshops, go to http://openstacksummitnovember2013.sched.org.


Swift
Thursday, November 7
 

9:00am

Storage Policies (and other things)
The combination of the DiskFile and DBBroker refactorings and the erasure code work will allow Swift to support multiple storage policies in the same cluster. Regions, tiers, different encodings, and combinations of all of these become possible.
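As a rough illustration of the idea (the names and structure below are hypothetical, not the design under discussion), a storage policy can be thought of as a mapping from a policy index to its own object ring and placement parameters:

```python
from collections import namedtuple

# Hypothetical sketch: a policy ties a name and index to its own
# object ring, so one cluster can offer several placement strategies.
StoragePolicy = namedtuple('StoragePolicy', 'index name ring_file replicas')

POLICIES = [
    StoragePolicy(0, 'standard', 'object.ring.gz', 3),        # default
    StoragePolicy(1, 'reduced-redundancy', 'object-1.ring.gz', 2),
    StoragePolicy(2, 'erasure-coded', 'object-2.ring.gz', None),
]

def get_policy(index):
    """Look up the policy an object was stored under (e.g. recorded
    in container metadata), falling back to the default policy 0."""
    for policy in POLICIES:
        if policy.index == index:
            return policy
    return POLICIES[0]
```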

We'll discuss the current work being done on storage policies.

While I'll focus on the storage policy work, I also want to cover some of the topics discussed at the recent Swift hackathon, such as cluster federation, discoverable features, and ssync.

(Session proposed by John Dickinson)


Thursday November 7, 2013 9:00am - 9:40am
AWE Level 2, Room 201C

9:50am

Swift Profiling Middleware and Tools
Swift performance tuning is a big topic. The current Swift profiling approach monitors the cluster and provides system- and process-level statistics that show how well the system is running, but it offers no code-level detail and cannot explain why. That data is needed to understand and improve the performance of the Swift core code. This proposal gives us deeper analysis of the code, showing what happens in the background and why. For example:
- How often is a function in a specific module called?
- How long do these calls take to execute?
- Where is most of the time spent?
- Why does the response time of a container PUT operation increase?
- Where do memory leaks happen, and how much memory does a specific code snippet consume?

The middleware being discussed has been proposed at https://review.openstack.org/#/c/53270/
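For flavor, a minimal profiling filter along these lines can answer the call-count and timing questions above with the standard-library profiler; this is a sketch only, not the middleware under review:

```python
import cProfile
import io
import pstats

class ProfilingMiddleware(object):
    """Sketch of a profiling WSGI filter: wrap each request in
    cProfile and log the most expensive functions. The middleware
    under review is more featureful than this."""

    def __init__(self, app, log_func=print, top_n=10):
        self.app = app
        self.log_func = log_func
        self.top_n = top_n

    def __call__(self, environ, start_response):
        profiler = cProfile.Profile()
        response = profiler.runcall(self.app, environ, start_response)
        out = io.StringIO()
        stats = pstats.Stats(profiler, stream=out)
        stats.sort_stats('cumulative').print_stats(self.top_n)
        self.log_func(out.getvalue())  # where is the time going?
        return response

def filter_factory(global_conf, **local_conf):
    # Standard paste.deploy entry point used by Swift pipelines.
    def profile_filter(app):
        return ProfilingMiddleware(app)
    return profile_filter
```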

(Session proposed by Edward)


Thursday November 7, 2013 9:50am - 10:30am
AWE Level 2, Room 201C

11:00am

Swift Drive Workloads and Kinetic Open Storage
A follow-up to the "Swift swift-bench workload analysis" session at the previous design summit:
- Further observations of drive workloads under swift-bench
- Changes to the drive interface: going from block storage to a key/value API (see the sketch after this list)
- Overview of the Kinetic device and library APIs
- Seagate's Kinetic prototype Swift cluster
- Observations of Kinetic drive workloads
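To make the interface shift concrete, a key/value drive API reduces roughly to put/get/delete plus range scans over keys. The class below is a hypothetical sketch of that shape, not the actual Kinetic library:

```python
class KeyValueDrive(object):
    """Hypothetical sketch of a key/value drive interface, in the
    spirit of (but not identical to) the Kinetic API: the drive
    stores key/value pairs instead of exposing raw blocks."""

    def __init__(self):
        self._store = {}  # in-memory stand-in for the drive

    def put(self, key, value):
        # No filesystem, no block layout: the drive owns placement.
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)

    def get_key_range(self, start_key, end_key):
        """Range scans replace directory walks, e.g. when replicating
        or auditing the objects of a partition."""
        return sorted(k for k in self._store if start_key <= k <= end_key)
```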

(Session proposed by Tim Feldman)


Thursday November 7, 2013 11:00am - 11:40am
AWE Level 2, Room 201C

11:50am

Plugging in Backend Swift Services
With the DiskFile abstraction in place for Icehouse development, it becomes much easier to create extensions for the Swift Object Server and to service requests against alternative storage systems.
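As a rough sketch of what such an extension involves (method names simplified; this is not the exact Swift DiskFile interface), a backend plugs in by giving the object server a DiskFile-like object to read and write through:

```python
class KVDiskFile(object):
    """Simplified, hypothetical sketch of a DiskFile-style backend:
    the object server talks to this interface, and the implementation
    decides where the bytes actually live (here, a key/value drive
    such as the one sketched above, instead of an on-disk file)."""

    def __init__(self, drive, account, container, obj):
        self.drive = drive
        self.key = '/'.join((account, container, obj))

    def write(self, body, metadata):
        # One durable put replaces the tmp-file/rename/fsync dance
        # of the filesystem implementation.
        self.drive.put(self.key, (metadata, body))

    def read(self):
        entry = self.drive.get(self.key)
        if entry is None:
            raise IOError('object not found: %s' % self.key)
        metadata, body = entry
        return metadata, body

    def delete(self):
        self.drive.delete(self.key)
```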

To continue and expand on this work, I'd like to briefly share my experience building on DiskFile, and an alternative object replicator implementation, for Swift on Ethernet-connected key/value drives [1]: specifically, what I discovered about where other storage implementations need not overlap with the existing filesystem/rsync implementation.

I'd like to engage the Swift community and seek input on how we can further separate the orchestration of Swift's backend storage system from its implementation, and discover emergent generalizations [2] that may be applicable to abstractions in the Swift consistency engine as a whole.

The goal of the session is a set of blueprints that can be refined, accepted, and developed during the Icehouse cycle.

1. Before the session, the kinetic-swift development codebase (which you can run on the Seagate Kinetic-Preview Simulator) will be publicly available as a reference for this work.
2. I'd call out gholt's ssync (https://review.openstack.org/#/c/44115/) specifically as another alternative replicator implementation that should inform how we think about plugging in to backend Swift processes.

(Session proposed by Clay Gerrard)


Thursday November 7, 2013 11:50am - 12:30pm
AWE Level 2, Room 201C

1:50pm

Swift operation experiences with hot contents
At KT, we have operational experience running Swift for hot-content download scenarios, including:
- interacting with a CDN as an origin server
- controlling the buffer cache in the object server
We want to share this experience and suggest some features to improve performance in such scenarios; one general buffer-cache technique is sketched below.
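The sketch below shows the general technique (assuming Linux and Python 3.3+; this is not necessarily what KT deployed): advise the kernel to drop cached pages for data that will not be re-read, so genuinely hot content stays resident.

```python
import os

def drop_buffer_cache(fd, offset, length):
    """Advise the kernel that this byte range will not be re-read
    soon, so its pages can be evicted from the buffer cache. Dropping
    cache after large, one-off reads or writes keeps them from pushing
    hot objects out of memory."""
    if hasattr(os, 'posix_fadvise'):  # Python 3.3+, POSIX systems
        os.posix_fadvise(fd, offset, length, os.POSIX_FADV_DONTNEED)

# Usage sketch: after streaming a cold object to a client.
# with open(path, 'rb') as f:
#     data = f.read()
#     drop_buffer_cache(f.fileno(), 0, len(data))
```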

(Session proposed by Hodong Hwang)


Thursday November 7, 2013 1:50pm - 2:30pm
AWE Level 2, Room 201C

2:40pm

Supporting a global online messaging service
Services such as WhatsApp, WeChat, and LINE serve not only text but also MMS, images, videos, documents, etc.

I think Swift could fit these services perfectly, but their requirements may differ from VM image serving and similar use cases.

For example:

- How can Swift (and perhaps Keystone) support on the order of a billion users or permissions?

- Most of the objects these services serve are not big.

- What is the know-how for multi-IDC deployment of the object/container/account servers?

- Most objects will be consumed by one person or a small group.

Given these and other requirements, I'd like to discuss how to design zones and partitions, deploy the object/container/account servers, and set up the replicator, auditor, etc.


(Session proposed by Iryoung Jeong)


Thursday November 7, 2013 2:40pm - 3:20pm
AWE Level 2, Room 201C

3:30pm

Making Swift More Robust in Handling Failure
I would like to take some time to focus on some areas that could make Swift more robust in failure scenarios. This may include discussion of (but is not limited to):

1. Better error limiting. The error-limiting code has gotten a bit stale and could use an audit and cleanup. On top of that, error limits are tracked per worker, so on a machine with many workers it can take a while before a node gets completely error limited. It might be useful to have a local cache that is shared across the workers.

2. Early return on writes. Currently, the elapsed time for a write is that of the slowest of the 3 replica writes, so a single badly behaving node can cause a lot of issues. We should be able to return to the user as soon as 2 replicas have been successfully written (see the sketch after this list).

3. Async fsync. It might be useful to have an optional setting that would allow an object server to return immediately upon completion of the write and issue the fsync asynchronously. This of course comes with some risk, but I would like to discuss ways to mitigate it.
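On point 2, the core idea can be sketched as follows, assuming a hypothetical write_one_replica(node) helper that performs one replica write and returns True on success:

```python
import concurrent.futures

def quorum_write(write_one_replica, nodes, quorum=2):
    """Sketch of early return on writes: start all replica writes
    concurrently and answer the client as soon as `quorum` of them
    succeed, instead of waiting for the slowest node."""
    pool = concurrent.futures.ThreadPoolExecutor(len(nodes))
    futures = [pool.submit(write_one_replica, node) for node in nodes]
    successes = failures = 0
    for done in concurrent.futures.as_completed(futures):
        try:
            ok = done.result()
        except Exception:
            ok = False
        if ok:
            successes += 1
        else:
            failures += 1
        if successes >= quorum:
            pool.shutdown(wait=False)  # straggler finishes in background
            return True
        if failures > len(nodes) - quorum:
            pool.shutdown(wait=False)  # quorum is no longer reachable
            return False
    pool.shutdown()
    return False
```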

There are other smaller things as well, and I would be curious to hear other ideas about how we can make Swift more robust.

(Session proposed by creiht)


Thursday November 7, 2013 3:30pm - 4:10pm
AWE Level 2, Room 201C

4:30pm

Metadata Search
We at HP Storage have created an Object Storage Metadata Search (OSMS) REST API that we would like to present to the Swift community for open discussion. We have started an implementation that uses HP StoreAll's existing Express Query NoSQL database to store account, container, and object metadata for subsequent complex query functionality. It is similar in intent to SoftLayer's "API Operations for Search Services" (http://sldn.softlayer.com/article/API-Operations-Search-Services) but more feature-rich, and it requires no changes to OpenStack code other than Swift middleware.

We are interested in leading a pure-OpenStack specification and implementation as a standard extension to Swift, and we encourage any community effort to move this forward, including from SoftLayer developers. We have no pure-OpenStack reference implementation or design yet; we are just getting the conversation started. We will present the API we've designed so far as a strawman for discussion, extending the base Swift REST API (http://docs.openstack.org/api/openstack-object-storage/1.0/content).
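To give a flavor of the kind of extension being discussed, here is a purely hypothetical strawman request (the `query` parameter, expression syntax, and host are invented for illustration, not the OSMS API): a metadata query riding on a standard Swift container GET.

```python
import requests  # widely used HTTP client; stdlib urllib works too

# Hypothetical strawman, NOT the actual OSMS API: list objects in a
# container whose metadata matches a query expression.
resp = requests.get(
    'https://swift.example.com/v1/AUTH_test/photos',
    headers={'X-Auth-Token': '<token>'},
    params={
        'query': 'x-object-meta-camera="nikon" AND content-type="image/jpeg"',
        'format': 'json',
    },
)
for obj in resp.json():
    print(obj['name'])
```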


(Session proposed by Lincoln Thomas)


Thursday November 7, 2013 4:30pm - 5:10pm
AWE Level 2, Room 201C

5:20pm

Why is Swift's replica count configurable?
So you can change it! 4-replica clusters are the bee's knees... or are they?

In preparation for this session I will simulate failures on multi-node Swift deployments running 2-, 3-, and 4-replica rings and see where things break. I'd like to crunch some numbers and get laughed at for my naive arithmetic while attempting to explain how Swift's quorum calculation affects durability and availability at different replica counts.
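For reference, the quorum arithmetic in question is small enough to inline; the sketch below assumes the usual majority rule (replicas // 2 + 1), which matches Swift's behavior for replicated rings:

```python
def quorum_size(replicas):
    # Majority quorum: the smallest number of successful replica
    # writes such that any two quorums must overlap.
    return replicas // 2 + 1

for replicas in (2, 3, 4):
    q = quorum_size(replicas)
    print('replicas=%d quorum=%d writes tolerate %d failed node(s)'
          % (replicas, q, replicas - q))

# replicas=2 quorum=2 writes tolerate 0 failed node(s)  <- why the
#   two-disk, two-replica cluster below can't upload with one unplugged
# replicas=3 quorum=2 writes tolerate 1 failed node(s)
# replicas=4 quorum=3 writes tolerate 1 failed node(s)
```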

The goal of this session would be public documentation of the current expected behavior, which we can point to in our defense when triaging bugs like "I have a two-disk, two-replica Swift, and if I unplug it I can't upload my lolcats".

I'm also open to the possibility of learning where it might make sense to propose changes (or a more tunable configuration, if there is no single obviously correct behavior) for valid use cases that the Swift community *wants* to support. So I'll need you to come prepared to tell me what we think is reasonable.

(Session proposed by Clay Gerrard)


Thursday November 7, 2013 5:20pm - 6:00pm
AWE Level 2, Room 201C