Industry chit-chat:
(Client side: pen-and-paper walkthrough of all workflows)
(Server side: ask the finance dept what the tolerance is for double orders etc. CRM workflows need to be co-designed at this time. How many errors per million opportunities? If the error is smaller than X, the brokerage sucks it up; if larger than Y, the CRM process triggers, etc. Standardised risk-reporting framework and governance SOP.)
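A minimal sketch of how that error-tolerance routing could look. The X and Y thresholds below are hypothetical placeholders that finance would actually set, and the return values are just illustrative labels, not a real CRM integration:

```python
# Hypothetical thresholds: finance dept decides X and Y; these numbers are made up.
ABSORB_BELOW = 5.00      # X: errors under this ($) are absorbed by the brokerage
ESCALATE_ABOVE = 500.00  # Y: errors over this trigger the CRM process

def route_order_error(error_amount_usd: float) -> str:
    """Decide what happens to a single order error, per the governance SOP."""
    if error_amount_usd < ABSORB_BELOW:
        return "absorb"          # brokerage sucks it up
    if error_amount_usd > ESCALATE_ABOVE:
        return "crm_escalation"  # CRM process triggers
    return "manual_review"       # in-between band: handled under the standard SOP
```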
Basically all customer orders need to go into a queue.
CRM and UIX have to cover the scenario where the queue somehow missed or lost an order, and also where the order fails. All types of orders: the common ones (limit, market, stop-limit, etc.), depending on the brokerage's feature surface.
The database cannot be multi-master if the tolerance for error is too low. The algorithms get complicated if you try to tolerate more error and turn on multi-master, hence RMD needs their statistical models to be very tight. So if RMD is sane, RMD will ask for a single master. Sorry, you can multi-master also ... but then your users have to wait for the masters to sync to consensus.
Now all your users get notified when their order goes into the queue. It is soft real time, not hard real time. They get notified when the order is fulfilled or finally cancelled. Arguably not even real time. Haha
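A minimal sketch of that queue-plus-notification flow, assuming a single-master setup. The Order, submit, settle and notify names are placeholders for illustration, not any real brokerage API:

```python
import queue
from dataclasses import dataclass
from enum import Enum

class OrderStatus(Enum):
    QUEUED = "queued"
    FILLED = "filled"
    CANCELLED = "cancelled"
    FAILED = "failed"      # covers the "queue lost / failed my order" CRM scenario

@dataclass
class Order:
    order_id: str
    asset: str
    order_type: str        # "limit", "market", "stop-limit", ...
    status: OrderStatus = OrderStatus.QUEUED

order_queue: "queue.Queue[Order]" = queue.Queue()

def notify(order: Order) -> None:
    # Placeholder: push to app/email. Soft real time, not hard real time.
    print(f"order {order.order_id}: {order.status.value}")

def submit(order: Order) -> None:
    order_queue.put(order)   # single master: one writer owns the queue
    notify(order)            # user is told the order is queued, not yet executed

def settle(order: Order, filled: bool) -> None:
    order.status = OrderStatus.FILLED if filled else OrderStatus.CANCELLED
    notify(order)            # user is told the final outcome, eventually
```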
Load testing: non-trivial. You have to model an entire stock-market session. Say, even if it's 3 minutes: 10x your current user base, the surge pattern, etc. Your business analysts need to come up with that model; your techs just need to implement it. Err, not really expensive. You just need the BA to have their head screwed on right. Otherwise you're testing a fake usage pattern.
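A minimal sketch of what "implementing the BA's model" might look like, assuming a hypothetical 3-minute session with an opening surge that decays to a steady state. The shape and every number here are illustrative placeholders for whatever the BA actually specifies:

```python
import math

def orders_per_second(t: float, base_users: int = 10_000, multiplier: int = 10) -> float:
    """Illustrative load profile for a 180-second simulated session.

    Assumes an opening surge decaying toward a steady state; the BA's real
    model (surge shape, user counts, session length) would replace all of this.
    """
    peak = base_users * multiplier * 0.05          # assume 5% of surged users order at the open
    steady = base_users * 0.01                     # assume 1% steady-state order rate
    surge = (peak - steady) * math.exp(-t / 30.0)  # surge decays over ~30 seconds
    return steady + surge

# Feed the profile to whatever load generator you use (pseudo-usage):
schedule = [orders_per_second(t) for t in range(180)]
```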
I think the key requirement specifications will be:
- minimum orders-per-second per asset (you can scale out across assets, but only scale up per asset) ... 10^(1/2/3/4/5?)
- maximum milliseconds-per-order-processed
- maximum % of orders processed in error per (second / minute / hour); triggers a circuit breaker which stops trading (see the sketch after this list)
- maximum $ of errors processed per (second / minute / hour); triggers the breaker
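A minimal sketch of those last two breaker rules over a rolling window. The thresholds are hypothetical stand-ins for whatever numbers come out of the requirement spec above, and CircuitBreaker is an illustrative name, not a real exchange API:

```python
import time
from collections import deque

# Hypothetical thresholds; the real numbers come from the requirement spec above.
MAX_ERROR_RATE = 0.001      # max fraction of orders in error per window (0.1%)
MAX_ERROR_DOLLARS = 50_000  # max $ of erroneous orders per window
WINDOW_SECONDS = 60         # per-minute window; could equally be per second / hour

class CircuitBreaker:
    def __init__(self) -> None:
        self.events = deque()   # (timestamp, was_error, dollar_amount)
        self.tripped = False

    def record(self, was_error: bool, dollar_amount: float) -> None:
        now = time.monotonic()
        self.events.append((now, was_error, dollar_amount))
        # Drop events that have fallen out of the rolling window.
        while self.events and now - self.events[0][0] > WINDOW_SECONDS:
            self.events.popleft()
        self._check()

    def _check(self) -> None:
        total = len(self.events)
        errors = sum(1 for _, err, _ in self.events if err)
        error_dollars = sum(amt for _, err, amt in self.events if err)
        if total and (errors / total > MAX_ERROR_RATE or error_dollars > MAX_ERROR_DOLLARS):
            self.tripped = True   # stop trading; resuming is a manual / SOP decision
```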