At the SIPfoundry CoLab users conference, members of the sipXecs team present an architecture overview and explain why sipXecs is optimized for cloud production.
11. Testing Procedure
>Manual smoke test: basic tests that catch major issues
>Manual sanity test: detailed tests for each feature
>Manual regression tests: complex tests for features that were added or changed
>Automated Load Tests
>Deployment for a week on our production system
>Dogfooding…
12. Automated Load Tests
sipxtest
>Placing and receiving calls is the core feature, and we want it to be stable
>Basic testing cannot predict:
>How a server behaves over time
>How a server behaves under stress
>Call load tests help address both problems
>Allows determining the performance of a given server
>All servers are different (physical, virtual)
>Allows determining how well sipXecs scales
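To make "call load" concrete, here is a minimal pacing loop of the kind a load driver uses. This is a sketch, not the sipxtest implementation, and place_call is a hypothetical stand-in for starting one SIP dialog:

```python
import time

def place_call(n):
    # Hypothetical per-call work: a real harness would start a SIP dialog here.
    pass

def run_load(rate_cps, duration_s):
    """Place rate_cps calls per second for duration_s seconds."""
    interval = 1.0 / rate_cps
    start = time.monotonic()
    n = 0
    while time.monotonic() - start < duration_s:
        place_call(n)
        n += 1
        # Sleep until the next call slot, measured against the start time,
        # so the overall rate stays accurate even if place_call() is slow.
        next_slot = start + n * interval
        time.sleep(max(0.0, next_slot - time.monotonic()))
    return n

# e.g. run_load(15, 3 * 24 * 3600) matches the build-test profile below
```

Pacing against the start time (rather than sleeping a fixed interval each iteration) is what lets a driver hold a steady rate over days, which is exactly the "behavior over time" question basic testing cannot answer.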
13. sipxtest - Architecture
>Simple install: ‘yum install sipxtest’
>Pink – files or commands that the test user can interact with
>Yellow – generated files (you can edit these files, but know that sipxtest will overwrite your edits)
14. Load Test Numbers
What do we do as part of build testing?
>3 days of load testing for all major builds
>15 calls per second
>4 million calls total
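As a quick consistency check on those figures (a sketch in Python; the numbers come straight from the slide), 15 calls per second sustained for 3 days works out to roughly 3.9 million calls, i.e. the "4 million calls total" quoted above:

```python
# Back-of-the-envelope check of the load-test figures quoted above.
calls_per_second = 15
seconds_per_day = 24 * 60 * 60          # 86,400
days = 3

total_calls = calls_per_second * seconds_per_day * days
print(f"{total_calls:,}")               # 3,888,000 -- roughly 4 million
```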
15. sipXecs 4.6 Status
>Running in house on our production system since the end of July 2012
>Controlled release since August 2012
>GA December 1, 2012
>Update 1, February 5
>Polycom firmware updates, new iptables capabilities, bug fixes
>Update 2, February 6 (small revert)
>Update 3, March 13
>fail2ban, bug fixes
16. Roadmap – Near Term
End of Q1 to end of Q2
>openACD with Supervisor & Agent Portals
>Multiple-Level Administrator
>Multiple Time Zone Support
>Polycom VVX 300/400 Support
>sipXsbc
>Session State Services – SSS (clean up the RLS/XMPP link)
>Improvements to HA (remove the odd-number-of-servers requirement)
>Call Queuing
>Unite 2.0
17. Roadmap – Longer Term
>openACD Reporting
>Branch Office Solution
>Will likely involve looking at user & system management differently (i.e., more like a directory structure)
>User Portal rewrite
>Browser-based client, WebRTC. Zero-install communications solution.
>New Admin GUI
>Time to modernize a bit. The old interface is efficient but dated.
19.
>What is different compared with traditional architectures?
>What makes sipXecs an IT application?
>High-level intro to the sipXecs architecture (diagram)
>Hardware independence: what does this mean?
>Resulting deployment options: focus on flexibility, global scale, redundancy
>Redundancy, branch redundancy
>Focus on our ‘secret sauce’: what makes this architecture better than all the others?
20. Status of the 4.6 Release
>What is new?
>Experience with 4.6 in the field
>Test results and test methodology
25. 3:00-4:00 sipXecs Architecture
Moderator: Mike
Participants: Douglas, Daniel, Joegen, Ciprian
Engineering-provided content:
•Architecture overview (Mongo, SIP, XMPP, CFEngine high-level architecture diagram)
•Features and improvements delivered with 4.6
•Test automation (how do we test?)
•Status of 4.6
•Deployment examples (distributed, virtualized, redundancy)
•Roadmap – what comes next?
Editor's Notes
I'm going to show you how Mongo is integrated into this open source project.
Along the way I'll also be showing features of the project. I think it's a really good project, I'm really proud of it, and it's hard to think of a company that could not use it.
Each box represents a running daemon; each arrow is a network connection.
This architecture is of course highly scalable and fairly rare among communications systems, open source and otherwise.
What makes this system unique is that the entire system:
can be installed in minutes
can span multiple machines
includes monitoring, security, backup, and restore
is highly customizable, partly due to the architecture
Hyper-focused on call redundancy at the server level.
When replication broke ("split brain"), half the calls went to voicemail.
Lots of custom replication logic.
Recovery took a while with a lot of data, and downtimes were long.
The Mongo-based replication is much cleaner. We'll be getting rid of Postgres eventually.
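For context, here is a sketch of the kind of health check the Mongo-based setup makes easy, assuming pymongo and a replica set named "sipxecs" (the driver, address, and set name are illustrative, not taken from the slides). A healthy set has exactly one PRIMARY; anything else suggests the split-brain condition described above:

```python
# Illustrative replica-set health check; the URI and set name are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=sipxecs")
status = client.admin.command("replSetGetStatus")

# Print each member's state and count primaries; a healthy set has exactly one.
primaries = 0
for member in status["members"]:
    print(member["name"], member["stateStr"])
    if member["stateStr"] == "PRIMARY":
        primaries += 1

if primaries != 1:
    print("replica set is degraded or split")
```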
Publisher/watcher means that multiple apps can receive the same event.
Dealer/worker means that only one app receives and consumes the event.
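A minimal sketch of those two delivery patterns, assuming ZeroMQ-style sockets via pyzmq (the notes describe the semantics, not a specific library, so treat the transport choice here as an illustration): a PUB socket fans an event out to every watcher, while a PUSH socket hands each event to exactly one worker.

```python
# Illustrative only: PUB/SUB = publisher/watcher (everyone sees the event),
# PUSH/PULL = dealer/worker (exactly one consumer gets each event).
import threading
import time
import zmq

ctx = zmq.Context.instance()

def watcher(name):
    sub = ctx.socket(zmq.SUB)
    sub.connect("inproc://events")
    sub.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to everything
    print(name, "saw", sub.recv_string())

def worker(name):
    pull = ctx.socket(zmq.PULL)
    pull.connect("inproc://work")
    print(name, "consumed", pull.recv_string())

pub = ctx.socket(zmq.PUB)
pub.bind("inproc://events")
push = ctx.socket(zmq.PUSH)
push.bind("inproc://work")

for n in ("watcher-1", "watcher-2"):
    threading.Thread(target=watcher, args=(n,), daemon=True).start()
for n in ("worker-1", "worker-2"):
    threading.Thread(target=worker, args=(n,), daemon=True).start()

time.sleep(0.2)                    # let the inproc peers connect
pub.send_string("call.ended")      # both watchers see this one event
push.send_string("job-1")          # each job is consumed by exactly one worker
push.send_string("job-2")
time.sleep(0.2)
```

Running this, "call.ended" is printed by both watchers, while "job-1" and "job-2" are each consumed by exactly one worker, which is the distinction the notes draw between the two patterns.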