ActionCable slow in production - ruby-on-rails

I am building a basic chat application for customer support on a website. It works flawlessly in development on my local server, but since pushing it to production it has been extremely slow. The application itself is fast; it is the pub/sub over the ActionCable channels that lags.
I am using nginx and Puma as the web server and Redis for pub/sub. I have four channels, and two of them have heavy client-side code (coffee.erb files). How can I reduce the time the ActionCable channels take? How can I debug what is causing the lag?
Thanks in advance. If any code is required, please mention it in the comments and I will add it to the question.

The most common cause of things running far more slowly on a server than locally is that the server has much less RAM and starts swapping, just like an app being ridiculously slow on older phones.
In one case the operating system swaps memory in and out; in the other, the app swaps resources in and out itself (often implicitly, through resource caches provided by the API).
The effect is the same: massive I/O overhead that doesn't exist on your development machine or a modern phone, leading to runtime behavior that is slower by several orders of magnitude.
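One quick way to test that hypothesis is to look at swap usage on the production box itself. A minimal Ruby sketch, assuming a Linux server; the 100 MB threshold is an arbitrary illustration, not a rule:

    # Quick check for swap pressure; run on the production server.
    # Values in /proc/meminfo are reported in kB.
    meminfo = {}
    File.readlines("/proc/meminfo").each do |line|
      key, value = line.split(":")
      meminfo[key] = value.to_i
    end

    swap_used_mb = (meminfo["SwapTotal"] - meminfo["SwapFree"]) / 1024
    puts "Swap in use:   #{swap_used_mb} MB"
    puts "Mem available: #{meminfo.fetch("MemAvailable", 0) / 1024} MB"
    puts "Likely swapping; add RAM or trim processes" if swap_used_mb > 100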

Related

Multiple Rails apps using Apache and Mongrel

I am developing an application that has around 15 modules, all of them using the same database.
I am using Apache + Mongrel; I cannot use Passenger because I am working on Windows (please forgive me for this deadly sin!).
Which of the following is the better approach?
1. Deploy multiple small Rails applications on a virtual server, with a pair of Mongrels for each application.
2. Deploy one big Rails application.
I am worried about the number of running Mongrels and the memory/CPU load.
I'd suggest deploying a monolithic Rails application.
I use the request_routing plugin to drive 3 domains sharing the same database from one big Rails application.
I'm running 4 mongrels, which seems to be enough for now, but YMMV.
It depends on how many simultaneous clients you expect to have. One Mongrel serves one client at a time (until Rails 2.2), since Rails isn't currently threaded.
Two Mongrels are enough if you don't expect more than a few simultaneous users. You can raise that number by using page caching to bypass Mongrel for pages that don't contain user-specific dynamic content.
The only way to be truly sure is to test the system.
In my experience you'll need at least 4 Mongrels for a moderately active site with just a few users at a time.
It would seem like one application would best fit your scenario... as others have said...
A good rule of thumb is that a well-behaved Mongrel consumes about 60 MB of memory (or less). Take your total available RAM, subtract what the other services need (database, memcached, etc.), and work out how many pieces of the pie are left in the remaining memory, as in the sketch below.
You can always scale the number up or down from there...
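As a worked example of that arithmetic: the 2 GB box, 512 MB database, 64 MB memcached and 60 MB per Mongrel below are illustrative assumptions, not measurements.

    # Back-of-the-envelope Mongrel capacity planning per the rule of thumb above.
    total_ram_mb   = 2048   # assumed size of the box
    database_mb    = 512    # assumed MySQL footprint
    memcached_mb   = 64     # assumed memcached footprint
    per_mongrel_mb = 60     # "average behaving" Mongrel

    available_mb = total_ram_mb - database_mb - memcached_mb
    puts "Roughly #{available_mb / per_mongrel_mb} Mongrels fit in the remaining RAM"
    # => Roughly 24 Mongrels fit in the remaining RAM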
It sounds like it would be a much better use of your hardware to integrate all the modules into one comprehensive Rails app.
IMHO the primary weakness of Rails is the amount of resources needed to run a low- or very-low-traffic app. On the other hand, a few Mongrels go a long way toward serving a whole lot of traffic.

Async App Server versus Multiple Blocking Servers

tl;dr Many Rails apps or one Vertx/Play! app?
I've been having discussions with other members of my team on the pros and cons of using an async app server such as the Play! Framework (built on Netty) versus spinning up multiple instances of a Rails app server.
I know that Netty is asynchronous/non-blocking, meaning that during a database query, network request, or similar operation, an async call allows the event-loop thread to switch from the blocked request to another request that is ready to be processed/served. This keeps the CPUs busy instead of blocking and waiting.
I'm arguing in favor of using something such as the Play! Framework or Vertx.io, something non-blocking... scalable. My team members, on the other hand, say that you can get the same benefit by using multiple instances of a Rails app, which out of the box runs a single thread and doesn't have true concurrency the way apps on the JVM do. They say to just run enough Rails app instances to match the performance of one Play! application (or however many Play! apps we use), and that when a Rails app blocks, the OS will switch processes to a different Rails app. In the end, they say, the CPUs will be doing the same amount of work and we will get the same performance.
So here are my questions:
Are there any logical fallacies in the arguments above? Would the OS manage the Rails app instances as well as Netty (which also runs on the JVM, which maps threads to cores very well) manages requests in its event loop?
Would the OS be as performant in switching on blocking calls as would something like Netty or Vertx, or even something built on Ruby's own EventMachine?
With enough Rails app instances to match the performance of the Play! apps, would there be a noticeable cost difference in running the servers? If there is no cost difference, it wouldn't really matter which method is used, in my opinion. Shoot, if it were financially cheaper to spin up a million Rails apps than one Play! app, I would rather do that.
What are some other benefits to using either of these approaches that I may be failing to ask about?
Both approaches can and have worked. So if switching would incur a high development cost and/or schedule hit then it's probably not worth the effort...yet. Make the switch when the costs become unacceptably high. Think of using microservices as a gradual switching strategy.
If you are early on in your development cycle then making the switch early may make sense. Rewriting is a pain.
Or perhaps you'll never have to switch and rails will work for your use case like a charm. And you've been so successful at making your customers happy that the cash is just rolling in.
Some of the downsides of a blocking single server approach:
Increased memory usage. Sources: multiple processes, memory leaks, and lack of shared data structures (which increases communication costs and brings up consistency issues).
Lack of parallelism. This has two consequences: more boxes and more latency. You'll potentially need a much larger box count to handle the same load, so if you need to scale and have money concerns this can be a problem; if it isn't a concern, it doesn't matter. Within the server it means increased latency, the sort of latency that can't be improved by multiplying processes, which may be a killer argument depending on your app.
Some examples from teams that have made such a switch from Rails to Node.js and Go:
LinkedIn Moved From Rails To Node: 27 Servers Cut And Up To 20x Faster : http://highscalability.com/blog/2012/10/4/linkedin-moved-from-rails-to-node-27-servers-cut-and-up-to-2.html
Why Timehop Chose Go to Replace Our Rails App : https://medium.com/building-timehop/why-timehop-chose-go-to-replace-our-rails-app-2855ea1912d
How We Moved Our API From Ruby to Go and Saved Our Sanity : http://blog.parse.com/learn/how-we-moved-our-api-from-ruby-to-go-and-saved-our-sanity/
How We Went from 30 Servers to 2: Go : http://www.iron.io/blog/2013/03/how-we-went-from-30-servers-to-2-go.html
These posts represent arguments that are probably illustrative of what your group is going through. The decision is unfortunately not an obvious one.
It depends on the nature of what you are building, the nature of your team, the nature of resources, the nature of your skills, the nature of your goals and how you value all the different tradeoffs.
Would costs really drop? Isn't the same amount of computation done no matter the number of servers?
Depends on the type and scale of the work being done. Typically web services are IO bound, waiting on responses from other services like databases, caches, etc.
If you are using a single-threaded server, the process is blocked on IO a lot, so much of the time it is doing nothing. In contrast, a non-blocking server can handle many, many requests in the time the single-threaded server spends blocked. You can keep adding processes, but there are only so many processes a single machine can run. A non-blocking server can use the same number of processes while keeping the CPU as busy as possible handling requests. It is often possible to handle higher loads on smaller, cheaper machines when using non-blocking servers.
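To make that concrete, here is a toy sketch in plain Ruby; the sleep stands in for a database or network call, and the request count and wait time are arbitrary. It shows how much time a blocking process wastes waiting compared to overlapping the same waits:

    require "benchmark"

    requests = 20
    io_wait  = 0.1   # pretend each request spends 100 ms waiting on the database

    # One single-threaded, blocking process: the waits add up back to back.
    serial = Benchmark.realtime { requests.times { sleep io_wait } }

    # Overlapping the waits (threads here; an event loop achieves the same effect).
    concurrent = Benchmark.realtime do
      requests.times.map { Thread.new { sleep io_wait } }.each(&:join)
    end

    puts format("blocking, one at a time: %.2f s", serial)      # ~2.0 s
    puts format("waits overlapped:        %.2f s", concurrent)  # ~0.1 s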
If your expected request rate can be handled by an acceptable number of boxes and you don't expect huge spikes then you would be fine with single threaded servers. Nonblocking servers are great at soaking up load spikes without necessarily having to add machines.
If your work is such that response latencies don't really matter then you can get by with fewer nodes.
If your workload is CPU-bound then you'll need more boxes anyway, because the servers won't be blocking on IO and so there is no waiting time to overlap.
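A companion sketch to the one above: when the work is pure CPU rather than waiting, overlapping it inside one MRI Ruby process buys you nothing, since only one thread runs Ruby code at a time. That is why CPU-bound load really does come down to more cores and more boxes. The loop sizes are arbitrary:

    require "benchmark"

    crunch = ->(n) { n.times { Math.sqrt(rand) } }   # pure CPU work, no IO

    serial = Benchmark.realtime { 4.times { crunch.call(2_000_000) } }

    threaded = Benchmark.realtime do
      4.times.map { Thread.new { crunch.call(2_000_000) } }.each(&:join)
    end

    puts format("serial:   %.2f s", serial)
    puts format("threaded: %.2f s", threaded)   # roughly the same on MRI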

Ruby on Rails server requirements

I use Rails for small applications, but I'm not at all an expert. I'm hosting them on a DigitalOcean server with 512 MB of RAM, which seems to be insufficient.
I was wondering what the Ruby on Rails server requirements are (in terms of RAM) for a single app.
Besides that, how can I measure whether my server is able to support the number of applications running on it?
Many thanks
It depends on how much traffic you expect to handle. We have two machines (32 GB RAM each, usage shown below) with 32 Unicorn workers apiece to serve one app with loads of traffic, and we have one machine running lots of 2-worker apps that get very little traffic.
We also have to consider the database (which needs the most RAM by far in our case, due to the big caches we granted it). And on top of all that, *nix caches the filesystem in otherwise unused RAM.
Conclusion: it is very hard to tell without knowing what sort of traffic you expect.
Our memory usage on one of the two servers for the big app: https://gist.github.com/2called-chaos/bc2710744374f6e4a8e9b2d8c45b91cf
The output is from a little ruby script I made called unistat: https://gist.github.com/2called-chaos/50fc5412b34aea335fe9
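For reference, the kind of setup described above boils down to a Unicorn config along these lines. The worker count, paths and timeout are illustrative guesses, not the actual configuration from this answer:

    # config/unicorn.rb (sketch)
    worker_processes 32            # roughly: RAM left for the app / RSS per worker
    working_directory "/var/www/app/current"          # hypothetical deploy path
    listen "/var/www/app/shared/unicorn.sock", backlog: 64
    timeout 30
    preload_app true               # copy-on-write lets workers share much of their memory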

Thin + Nginx Production ready combination for RubyOnRails Application

I have recently installed Nginx + Thin on my deployment server, but I am not sure how this will perform under a heavy request/response load, say 1000 req per second.
The speed with Thin is good at 10-100 req per second;
I want to know how it holds up at higher volumes of requests and responses across the cluster.
Guide me on this :-)
If you have a single server, I think the main key, apart from everything already mentioned, is not to skimp on its specs. Trying to get too much to run on too little is just a recipe for disaster.
It is also a good idea to get monit or God monitoring your Thin instances. I started out with God, but it leaked memory pretty badly on Ruby 1.8.6, so I stopped using it in favour of monit. Monit is written in C, I believe, and has a tiny memory footprint, so I'd recommend that one.
If all that seems like a bit much to keep nginx and Thin playing nicely, you may want to look into an all-in-one solution like Passenger or LiteSpeed. I have very little experience with these, so I can offer no substantial advice about them.
Multiple thin processes and nginx are capable of providing lots of speed, depending on what your application is doing. So, the problem will be your application code, the speed of your application server, and your database server.
Scaling Rails has recently been covered in depth by the Scaling Rails Screencasts. I recommend you start there. My 5-step program for scaling Rails would be:
1. Get the tools to see what is slow in your application. Do not spend time optimizing everything in your application when you don't know what the problem is.
2. The easiest way to handle lots of requests per second is page caching.
3. If you can't do that, cache everything possible (fragment caching, memcached for data, etc.) to speed up your application; see the sketch after these steps.
4. After that, optimize your application as best you can: make SQL queries fast, index everything, etc.
5. If you still need more speed, throw more hardware at the problem. Get a big, powerful database server and a bunch of app servers, and proxy your requests across them. You could start here too, but it will only delay the optimization work.
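As an illustration of steps 2 and 3, a minimal sketch; ProductsController and the Product model are made up for the example, and in newer Rails versions caches_page comes from the actionpack-page_caching gem:

    class ProductsController < ApplicationController
      # Step 2: page caching writes the rendered HTML to disk so nginx can
      # serve it without ever touching Rails or Thin.
      caches_page :show

      def index
        # Step 3: cache expensive data in memcached
        # (config.cache_store = :mem_cache_store in production.rb).
        @top_products = Rails.cache.fetch("products/top", expires_in: 10.minutes) do
          Product.order(created_at: :desc).limit(20).to_a
        end
      end
    end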

Why are ASP.NET pages so much slower on localhost than on the production server

The title pretty much sums it up, and I'm sure there's a perfectly valid explanation, but it seems extremely odd that loading pages (after they're compiled) on my local computer takes forever, while the same code is blistering fast when "live".
I'm developing on Vista with IIS7 and pretty OK hardware, while the server is a single machine running Windows Server 2003 and IIS6 on a sub-3 GHz Xeon with a gigabit line.
Of course, I understand that the web server is especially tailored for this kind of activity, but it still seems strange that a machine serving 200-300 sessions at a time (spread unevenly across ~5 .NET 2.0 applications) over a remote network (a.k.a. the internet ;-)) is so much faster at presenting the pages than running the code locally...
Just something that's been on my mind for a while...
UPDATE
Thanks a lot for the answers! Just thought I'd add a few points to the above:
I have tried removing all obstacles surrounding my localhost: turned off the firewall and antivirus, stopped pouring milk into my computer case, killed any heavy processes, etc.
This is not contained to just one project or app; it's something I've noticed and wondered about since I started working as a developer ( ~1 year )
I don't think inaccessible resources have any significance; when working locally I usually have all the project's assets (pictures, Flash, etc.) locally.
I can't really see any difference concerning the cache being on or off.
I chose a random page from the project I'm currently working on and reloaded it completely a couple of times; locally I got it in about 4 seconds, compared to ~2 seconds from the server.
This was using FF and Firebug; using Opera I kind of felt there was a smaller difference but that's just my gut...
So I guess that leaves (as you mentioned) harddrives and the database connection...
Just seems weird....
If you are using Firefox or Safari on Windows Vista, you should disable IPv6, since it causes trouble on Vista in combination with WebDev and Firefox/Safari...
In Firefox, type about:config in the address bar, filter for "IPv6", and toggle the preference so that IPv6 is disabled.
This is a bug with IPv6 in Windows Vista and is a highly likely candidate for your troubles...
There are at least two reasons for this:
First, your local server is probably running the pages in debug mode with a debugger attached, which makes everything run slower.
Second, each time you change your page code or restart your server, all pages must be recompiled, and that takes some time.
On your production server the pages are compiled once, the compiled version is served to all users, and you are probably not running in debug mode (I hope!).
Well... after upgrading my machine (Q9550 @ 3.4 GHz, 1 TB drive at >100 MB/s), I see next to no difference even with this computer doing all the work (MS SQL Server, IIS) compared to the same page hosted at GoDaddy. When asking my initial question I had a somewhat lesser machine and compared it to my firm's dedicated servers.
So the answer to the question is basically:
They're not.
Thanks for all of your answers though!
There is no reason the app shouldn't run fast locally in the setup you describe; perhaps you have something else going on.
The first thing to look at would be what you are running on your dev box: anti-virus or software firewalls can be killers for these things, and you may want to test with them disabled.
You can also check whether your site is trying to access unavailable content (unavailable URLs) from your development machine. I've had this problem a couple of times before.
I'm surprised no one has mentioned hard disks yet. The hard disk is a typical bottleneck in a system, and desktop hard disks are often a lot slower than server (SCSI) disks. A desktop workstation may also have more processes all using the disk at the same time, whereas server machines are optimized to run only the critical server processes. But of course, it all depends on what exactly a machine is doing.
Are you actually running this through IIS7 or is it really running through Visual Studio's ASP.NET Development Server? If the latter, well... that right there is a huge reason. The ASP.NET Development Server is optimized to debug applications, not to run them quickly.
The other half of the problem is that you didn't actually tell us the specs of your machine, only that it's "ok hardware", which isn't much of a metric when it comes to computers. Vista does suck up some resources, both with its new display manager (for the Aero Glass desktop) and its tendency to pre-load commonly run applications into RAM.
It also sounds like you might be running the database server on your desktop as well, which eats resources the server machines don't have to spend, since they most likely have one or more separate database servers.
Have you considered that it may be because of caching? i.e. pages on the production server are cached, and those on localhost are not cached.
I also agree with terjetyl: it is possible that your localhost cannot find a linked file (e.g. a JavaScript source file), and your firewall could be blocking these...
If there are things stored on the server that the application needs to access, this will slow things down considerably. Yes, I've seen places where a production server hosted the only database system available to the whole company, for both production and development.
A million small things are in play: a faster network; a better DB server that has been running for a long time and has already executed (and cached) every query; ... maybe it's due to Vista :)

Resources