The best thing you can do here is to think of Lumira Server as a thin layer that provides access into HANA. It's not relevant to the sizing discussion - focus on HANA instead.
For HANA you have two numbers to work with - 3bn scans/sec/core and 25m aggregations/sec/core for Ivy Bridge (2bn and 16m respectively for Westmere).
So let's say we have a 2S/30c/512GB Ivy Bridge system. This will give you 750m aggs/sec across the system (30 cores x 25m). So if you have a 750m row result set and you aggregate the whole lot, you can do this once per second (or 3600 queries/h).
If you have a 750m row result set and only aggregate half of it, you could reasonably expect 7200 queries/h. If you are scanning huge amounts but not aggregating much, then it's the scan rate and the result set size you need to focus on instead.
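To make that arithmetic easy to rerun with your own numbers, here's a rough back-of-envelope sketch in Python. It uses only the per-core figures quoted above; the function name and structure are my own illustration, not official SAP sizing guidance:

```python
# Back-of-envelope HANA aggregation sizing, using the per-core figures
# quoted above (illustrative only, not official SAP sizing guidance).
IVY_BRIDGE_AGGS_PER_SEC_PER_CORE = 25_000_000  # 25m aggregations/sec/core

def queries_per_hour(cores, rows_aggregated_per_query,
                     rate=IVY_BRIDGE_AGGS_PER_SEC_PER_CORE):
    """Estimate query throughput from aggregation capacity alone."""
    system_aggs_per_sec = cores * rate
    seconds_per_query = rows_aggregated_per_query / system_aggs_per_sec
    return 3600 / seconds_per_query

# 2S/30c Ivy Bridge aggregating a full 750m row result set:
print(queries_per_hour(30, 750_000_000))  # -> 3600.0 queries/h
# Aggregating only half of it:
print(queries_per_hour(30, 375_000_000))  # -> 7200.0 queries/h
```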
It's worth noting that if you have big data, you need a decent number of cores to get awesome performance. I have been testing 5bn rows on 40 cores of Westmere, and if you aggregate all the data in one query, performance suffers. Moving to 160 cores fixed that.
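You can see why with the same arithmetic, using the 16m aggs/sec/core Westmere figure from above. This is an estimate derived from those numbers rather than a measured benchmark, but it lines up with what I saw:

```python
# Estimated time to aggregate 5bn rows on Westmere (16m aggs/sec/core).
WESTMERE_AGGS_PER_SEC_PER_CORE = 16_000_000
ROWS = 5_000_000_000

for cores in (40, 160):
    seconds = ROWS / (cores * WESTMERE_AGGS_PER_SEC_PER_CORE)
    print(f"{cores} cores: ~{seconds:.1f}s per full aggregation")
# 40 cores: ~7.8s per full aggregation - painful for interactive BI
# 160 cores: ~2.0s per full aggregation
```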
It's also worth noting that if you have very complex queries and joins (unlikely if you are using Lumira Server, since its datasets are denormalized) then these numbers can drop.
On average sizing, my advice is to get at least 30-60 cores - you need this many to experience how amazing HANA is. On compression, it varies; against CSV flat files, we usually see 5-10:1.
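If you want to turn that compression range into a memory estimate, a quick sketch (the 1TB input size here is a made-up example; real ratios depend heavily on your data):

```python
# Rough in-memory footprint from raw CSV size, using the 5-10:1 range
# we typically see against flat files. The 1TB figure is hypothetical.
csv_gb = 1024  # hypothetical: ~1TB of CSV flat files

for ratio in (5, 10):
    print(f"{ratio}:1 compression -> ~{csv_gb / ratio:.0f}GB in memory")
# 5:1 compression -> ~205GB in memory
# 10:1 compression -> ~102GB in memory
```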
On hardware prices, they haven't fallen that much, but make sure your procurement department negotiates.
Hope this helps.