Replace Unicorn with Puma

… and drop the single_process_mode. See the included Changelog entry
for full details on what this change means.
Dennis Schubert 2022-09-09 04:33:37 +02:00
parent bb80ca3394
commit 97cfc80a1f
No known key found for this signature in database
GPG key ID: 5A0304BEA7966D7E
16 changed files with 158 additions and 217 deletions

@ -16,6 +16,20 @@ After [a discussion with our community on Discourse](https://discourse.diasporaf
Although the chat was never enabled by default and was marked as experimental, some production pods did set up the integration and offered an XMPP service to their users. After this release, diaspora\* will no longer contain a chat applet, so users will no longer be able to use the webchat inside diaspora\*. The existing module that enables users to authenticate to Prosody using their diaspora\* credentials will continue to work, but contact list synchronization might not work without further changes to the Prosody module, which is developed independently of this project.
## Changes around the appserver and related configuration
With this release, we switched from `unicorn` to `puma` to run our application. For podmins running the default setup, this should significantly reduce memory usage, with similar or even better frontend performance! However, as great as this change is, some configuration changes are required.
- The `single_process_mode` and `embed_sidekiq_worker` configurations have been removed. This mode was never truly a "single-process" mode, as it simply spawned the background workers inside the app server process. If you're using `script/server` to start your pod, this change does not impact you, but if you're running diaspora\* by other means and relied on this "single"-process mode, please ensure that Sidekiq workers get started.
- The format of the `listen` configuration has changed. If you have not set that field in your configuration, you can skip this. Otherwise, make sure to adjust your configuration accordingly:
  - Listening to Unix sockets with a relative path has changed from `unix:tmp/diaspora.sock` to `unix://tmp/diaspora.sock`.
- Listening to Unix sockets with an absolute path has changed from `unix:/run/diaspora/diaspora.sock` to `unix:///run/diaspora/diaspora.sock`.
- Listening to a local port has changed from `127.0.0.1:3000` to `tcp://127.0.0.1:3000`.
- The `PORT` environment variable and the `-p` parameter to `script/server` have been removed. If you used that to run diaspora\* on a non-standard port, please use the `listen` configuration.
- The `unicorn_worker` configuration has been dropped. With Puma, there should be no need to run more than a single worker on any pod, regardless of size.
- The `unicorn_timeout` configuration has been renamed to `web_timeout`.
- **If you don't run your pod with `script/server`**, you have to update your setup. If you previously called `bin/bundle exec unicorn -c config/unicorn.rb` to run diaspora\*, you now have to run `bin/puma -C config/puma.rb`! Please update your systemd units or similar accordingly.
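The `listen` migrations above follow a simple pattern: Unix-socket values gain `//` after the scheme, and bare host:port values gain a `tcp://` prefix. As a sketch (the `migrate_listen` helper below is hypothetical, not part of the diaspora\* codebase):

```ruby
# Sketch: mapping old unicorn-style `listen` values to the URI
# format that Puma's `bind` expects.
def migrate_listen(value)
  case value
  when %r{\Aunix:(?!//)(.+)\z}                     # unix socket, relative or absolute path
    "unix://#{Regexp.last_match(1)}"
  when /\A[\d.]+:\d+\z/                            # plain host:port
    "tcp://#{value}"
  else                                             # already in the new format
    value
  end
end

puts migrate_listen("unix:tmp/diaspora.sock")           # unix://tmp/diaspora.sock
puts migrate_listen("unix:/run/diaspora/diaspora.sock") # unix:///run/diaspora/diaspora.sock
puts migrate_listen("127.0.0.1:3000")                   # tcp://127.0.0.1:3000
```

Note that an absolute socket path naturally ends up with three slashes (`unix:///run/…`), since the path itself starts with `/`.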
## Yarn for frontend dependencies
We now use yarn to install the frontend dependencies, so you need to have it installed. See https://yarnpkg.com/en/docs/install for installation instructions.

@ -10,8 +10,7 @@ gem "responders", "3.0.1"
# Appserver
gem "unicorn", "6.1.0", require: false
gem "unicorn-worker-killer", "0.4.5"
gem "puma", "5.6.5", require: false
# Federation

@ -310,8 +310,6 @@ GEM
fuubar (2.5.1)
rspec-core (~> 3.0)
ruby-progressbar (~> 1.4)
get_process_mem (0.2.7)
ffi (~> 1.0)
gitlab (4.18.0)
httparty (~> 0.18)
terminal-table (>= 1.5.1)
@ -398,7 +396,6 @@ GEM
jsonpath (1.1.2)
multi_json
jwt (2.4.1)
kgio (2.11.4)
kostya-sigar (2.0.10)
leaflet-rails (1.7.0)
rails (>= 4.2.0)
@ -520,6 +517,8 @@ GEM
byebug (~> 11.0)
pry (~> 0.10)
public_suffix (4.0.7)
puma (5.6.5)
nio4r (~> 2.0)
raabro (1.4.0)
racc (1.6.0)
rack (2.2.4)
@ -581,7 +580,6 @@ GEM
rake (>= 12.2)
thor (~> 1.0)
rainbow (3.1.1)
raindrops (0.20.0)
rake (12.3.3)
rash_alt (0.4.12)
hashie (>= 3.4)
@ -735,12 +733,6 @@ GEM
unf_ext
unf_ext (0.0.8.2)
unicode-display_width (1.8.0)
unicorn (6.1.0)
kgio (~> 2.6)
raindrops (~> 0.7)
unicorn-worker-killer (0.4.5)
get_process_mem (~> 0)
unicorn (>= 4, < 7)
uuid (2.3.9)
macaddr (~> 1.0)
valid (1.2.0)
@ -848,6 +840,7 @@ DEPENDENCIES
pronto-scss (= 0.11.0)
pry
pry-byebug
puma (= 5.6.5)
rack-cors (= 1.1.1)
rack-google-analytics (= 1.2.0)
rack-piwik (= 0.3.0)
@ -885,8 +878,6 @@ DEPENDENCIES
twitter (= 7.0.0)
twitter-text (= 3.1.0)
typhoeus (= 1.4.0)
unicorn (= 6.1.0)
unicorn-worker-killer (= 0.4.5)
uuid (= 2.3.9)
versionist (= 2.0.1)
webmock (= 3.14.0)

@ -1,2 +1,2 @@
web: bin/bundle exec unicorn -c config/unicorn.rb -p $PORT
web: bin/puma -C config/puma.rb
sidekiq: bin/bundle exec sidekiq

@ -27,12 +27,17 @@ module Workers
end
def currently_running_archive_jobs
return 0 if AppConfig.environment.single_process_mode?
Sidekiq::Workers.new.count do |process_id, thread_id, work|
!(Process.pid.to_s == process_id.split(":")[1] && Thread.current.object_id.to_s(36) == thread_id) &&
ArchiveBase.subclasses.map(&:to_s).include?(work["payload"]["class"])
end
rescue Redis::CannotConnectError
# If code gets to this point and there is no Redis connection, we're
# running in a test environment and have not mocked Sidekiq::Workers, so
# we're not testing the concurrency-limiting behavior.
# There is no way a production pod will run into this code, as diaspora*
# refuses to start without Redis.
0
end
end
end
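The counting logic above can be illustrated with plain data. This sketch mimics the `(process_id, thread_id, work)` triples that `Sidekiq::Workers` yields; the worker class names and identifiers are hypothetical, and the self-check is simplified (the real code compares against `Thread.current.object_id.to_s(36)`):

```ruby
# Hypothetical archive worker class names for illustration only.
ARCHIVE_CLASSES = ["Workers::ExportUser", "Workers::ExportPhotos"].freeze

# Count archive jobs run by *other* workers, excluding our own job
# and any non-archive jobs, mirroring currently_running_archive_jobs.
def other_archive_jobs(workers, own_pid, own_thread)
  workers.count do |process_id, thread_id, work|
    is_self = process_id.split(":")[1] == own_pid && thread_id == own_thread
    !is_self && ARCHIVE_CLASSES.include?(work["payload"]["class"])
  end
end

workers = [
  ["host:100:abc", "t1", { "payload" => { "class" => "Workers::ExportUser" } }],
  ["host:100:abc", "t2", { "payload" => { "class" => "Workers::ExportPhotos" } }],
  ["host:200:def", "t3", { "payload" => { "class" => "Workers::Mail" } }],
]
p other_archive_jobs(workers, "100", "t1") # => 1 (only t2's export counts)
```

Our own job (pid 100, thread t1) and the non-archive mail job are excluded, so only the parallel export on t2 is counted.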

bin/puma Executable file
@ -0,0 +1,27 @@
#!/usr/bin/env ruby
# frozen_string_literal: true
#
# This file was generated by Bundler.
#
# The application 'puma' is installed as part of a gem, and
# this file is here to facilitate running it.
#
ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../Gemfile", __dir__)
bundle_binstub = File.expand_path("bundle", __dir__)
if File.file?(bundle_binstub)
if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/
load(bundle_binstub)
else
abort("Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.
Replace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.")
end
end
require "rubygems"
require "bundler/setup"
load Gem.bin_path("puma", "puma")

bin/pumactl Executable file
@ -0,0 +1,27 @@
#!/usr/bin/env ruby
# frozen_string_literal: true
#
# This file was generated by Bundler.
#
# The application 'pumactl' is installed as part of a gem, and
# this file is here to facilitate running it.
#
ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../Gemfile", __dir__)
bundle_binstub = File.expand_path("bundle", __dir__)
if File.file?(bundle_binstub)
if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/
load(bundle_binstub)
else
abort("Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.
Replace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.")
end
end
require "rubygems"
require "bundler/setup"
load Gem.bin_path("puma", "pumactl")

@ -8,14 +8,6 @@
require_relative "config/environment"
# Kill unicorn workers really aggressively (at 300mb)
if defined?(Unicorn)
require "unicorn/worker_killer"
oom_min = (280) * (1024**2)
oom_max = (300) * (1024**2)
# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, oom_min, oom_max
end
use Rack::Deflater
run Rails.application

@ -11,7 +11,6 @@ defaults:
certificate_authorities:
redis:
require_ssl: true
single_process_mode: false
sidekiq:
concurrency: 5
retry: 10
@ -40,14 +39,12 @@ defaults:
sql: false
federation: false
server:
listen: '0.0.0.0:3000'
listen: "tcp://127.0.0.1:3000"
rails_environment: 'development'
pid: "tmp/pids/web.pid"
stderr_log:
stdout_log:
unicorn_worker: 2
unicorn_timeout: 90
embed_sidekiq_worker: false
web_timeout: 90
sidekiq_workers: 1
map:
mapbox:
@ -179,23 +176,19 @@ development:
environment:
assets:
serve: true
single_process_mode: true
require_ssl: false
logging:
debug:
sql: true
server:
unicorn_worker: 1
settings:
autofollow_on_join: false
autofollow_on_join_user: ''
production:
server:
listen: 'unix:tmp/diaspora.sock'
listen: 'unix://tmp/diaspora.sock'
test:
environment:
url: 'http://localhost:9887/'
single_process_mode: true
require_ssl: false
assets:
serve: true

@ -54,14 +54,6 @@
## Do not change this default unless you are sure!
#require_ssl = true
## Single-process mode (default=false).
## If set to true, Diaspora will work with just the appserver (Unicorn by
## default) running. However, this makes it quite slow as intensive jobs
## must be run all the time inside the request cycle. We strongly
## recommended you leave this disabled for production setups.
## Set to true to enable.
#single_process_mode = false
## Set redirect URL for an external image host (Amazon S3 or other).
## If hosting images for your pod on an external server (even your own),
## add its URL here. All requests made to images under /uploads/images
@ -162,12 +154,12 @@
## Settings affecting how ./script/server behaves.
[configuration.server]
## Where the appserver should listen to (default="unix:tmp/diaspora.sock")
#listen = "unix:tmp/diaspora.sock"
#listen = "unix:/run/diaspora/diaspora.sock"
#listen = "127.0.0.1:3000"
## Where the appserver should listen to (default="unix://tmp/diaspora.sock")
#listen = "unix://tmp/diaspora.sock"
#listen = "unix:///run/diaspora/diaspora.sock"
#listen = "tcp://127.0.0.1:3000"
## Set the path for the PID file of the unicorn master process (default=tmp/pids/web.pid)
## Set the path for the PID file of the web master process (default=tmp/pids/web.pid)
#pid = "tmp/pids/web.pid"
## Rails environment (default="development").
@ -175,23 +167,15 @@
## Change this to "production" if you wish to run a production environment.
#rails_environment = "production"
## Write unicorn stderr and stdout log.
#stderr_log = "log/unicorn-stderr.log"
#stdout_log = "log/unicorn-stdout.log"
## Number of Unicorn worker processes (default=2).
## Increase this if you have many users.
#unicorn_worker = 2
## Write web stderr and stdout log.
#stderr_log = "log/web-stderr.log"
#stdout_log = "log/web-stdout.log"
## Number of seconds before a request is aborted (default=90).
## Increase if you get empty responses, or if large image uploads fail.
## Decrease if you're under heavy load and don't care if some
## requests fail.
#unicorn_timeout = 90
## Embed a Sidekiq worker inside the unicorn process (default=false).
## Useful for minimal Heroku setups.
#embed_sidekiq_worker = false
#web_timeout = 90
## Number of Sidekiq worker processes (default=1).
## In most cases it is better to

@ -14,39 +14,30 @@ Eye.application("diaspora") do
stderr "log/eye_processes_stderr.log"
process :web do
unicorn_command = "bin/bundle exec unicorn -c config/unicorn.rb"
web_command = "bin/puma -C config/puma.rb"
if rails_env == "production"
start_command "#{unicorn_command} -D"
daemonize false
restart_command "kill -USR2 {PID}"
restart_grace 10.seconds
else
start_command unicorn_command
daemonize true
end
start_command web_command
daemonize true
restart_command "kill -USR2 {PID}"
restart_grace 10.seconds
pid_file AppConfig.server.pid.get
stop_signals [:TERM, 10.seconds]
env "PORT" => ENV["PORT"]
monitor_children do
stop_command "kill -QUIT {PID}"
end
end
group :sidekiq do
with_condition(!AppConfig.environment.single_process_mode?) do
AppConfig.server.sidekiq_workers.to_i.times do |i|
i += 1
AppConfig.server.sidekiq_workers.to_i.times do |i|
i += 1
process "sidekiq#{i}" do
start_command "bin/bundle exec sidekiq"
daemonize true
pid_file "tmp/pids/sidekiq#{i}.pid"
stop_signals [:USR1, 0, :TERM, 10.seconds, :KILL]
end
process "sidekiq#{i}" do
start_command "bin/bundle exec sidekiq"
daemonize true
pid_file "tmp/pids/sidekiq#{i}.pid"
stop_signals [:USR1, 0, :TERM, 10.seconds, :KILL]
end
end
end

@ -3,16 +3,6 @@
require "sidekiq_middlewares"
require "sidekiq/middleware/i18n"
# Single process-mode
if AppConfig.environment.single_process_mode? && !Rails.env.test?
if Rails.env.production?
warn "WARNING: You are running Diaspora in production without Sidekiq"
warn " workers turned on. Please set single_process_mode to false in"
warn " config/diaspora.toml."
end
require "sidekiq/testing/inline"
end
Sidekiq.configure_server do |config|
config.redis = AppConfig.get_redis_options

config/puma.rb Normal file
@ -0,0 +1,47 @@
# frozen_string_literal: true
require_relative "load_config"
pidfile AppConfig.server.pid.get
bind AppConfig.server.listen.get
worker_timeout AppConfig.server.web_timeout.to_i
if AppConfig.server.stdout_log? || AppConfig.server.stderr_log?
stdout_redirect AppConfig.server.stdout_log? ? AppConfig.server.stdout_log.get : "/dev/null",
AppConfig.server.stderr_log? ? AppConfig.server.stderr_log.get : "/dev/null"
end
# In general, running Puma in cluster mode is one of those very rare setups
# that's only relevant at *huge* scale. However, starting 1 worker runs Puma
# in cluster mode with a single worker. This means you pay all the memory
# overhead of spawning in "cluster mode", but you don't get any of the
# performance benefits. This makes no sense. Setting "workers 0" explicitly
# turns off cluster mode.
#
# For more details and further references, see
# https://github.com/puma/puma/commit/81d26e91b777ab120e8f52d45385f0e018438ba4
workers 0
preload_app!
before_fork do
# we're preloading the app in production, so force-reconnect the DB
ActiveRecord::Base.connection_pool.disconnect!
# drop the Redis connection
Sidekiq.redis { |redis| redis.client.disconnect }
end
on_worker_boot do
# reopen logfiles to obtain a new file descriptor
Logging.reopen
ActiveSupport.on_load(:active_record) do
# we're preloading app in production, so reconnect to DB
ActiveRecord::Base.establish_connection
end
# We don't generate uuids in the frontend, but let's be on the safe side
UUID.generator.next_sequence
end

@ -1,48 +0,0 @@
# frozen_string_literal: true
require_relative "load_config"
port = ENV["PORT"]
port = port && !port.empty? ? port.to_i : nil
listen port || AppConfig.server.listen.get unless RACKUP[:set_listener]
pid AppConfig.server.pid.get
worker_processes AppConfig.server.unicorn_worker.to_i
timeout AppConfig.server.unicorn_timeout.to_i
stderr_path AppConfig.server.stderr_log.get if AppConfig.server.stderr_log?
stdout_path AppConfig.server.stdout_log.get if AppConfig.server.stdout_log?
preload_app true
@sidekiq_pid = nil
before_fork do |_server, _worker|
ActiveRecord::Base.connection.disconnect! # preloading app in master, so reconnect to DB
# disconnect redis if in use
Sidekiq.redis(&:close) unless AppConfig.environment.single_process_mode?
@sidekiq_pid ||= spawn("bin/bundle exec sidekiq") if AppConfig.server.embed_sidekiq_worker?
end
after_fork do |server, worker|
Logging.reopen # reopen logfiles to obtain a new file descriptor
ActiveRecord::Base.establish_connection # preloading app in master, so reconnect to DB
# We don't generate uuids in the frontend, but let's be on the safe side
UUID.generator.next_sequence
# Check for an old master process from a graceful restart
old_pid = "#{AppConfig.server.pid.get}.oldbin"
if File.exist?(old_pid) && server.pid != old_pid
begin
# Remove a worker from the old master when we fork a new one (TTOU)
# Except for the last worker forked by this server, which kills the old master (QUIT)
signal = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
Process.kill(signal, File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
# someone else did our job for us
end
end
end

@ -22,23 +22,6 @@ on_failure()
fi
}
# Check if already running/port blocked
chk_service()
{
port=${1:?Missing port}
case $os in
*[Bb][Ss][Dd]*|Darwin)
## checks ipv[46]
netstat -anL | awk '{print $2}' | grep "\.$1$"
;;
*)
# Is someone listening on the ports already? (ipv4 only test ?)
netstat -nl | grep '[^:]:'$port'[ \t]'
;;
esac
}
# ensure right directory
realpath=$( ruby -e "puts File.expand_path(\"$0\")")
cd $(dirname $realpath)/..
@ -106,8 +89,6 @@ fi
os=$(uname -s)
vars=$(bin/bundle exec ruby ./script/get_config.rb \
single_process_mode=environment.single_process_mode? \
embed_sidekiq_worker=server.embed_sidekiq_worker \
workers=server.sidekiq_workers \
redis_url=environment.redis \
| grep -vE "is not writable|as your home directory temporarily"
@ -115,24 +96,6 @@ vars=$(bin/bundle exec ruby ./script/get_config.rb \
on_failure "Couldn't parse $CONFIG_FILE!"
eval "$vars"
args="$@"
for arg in $(echo $args | awk '{ for (i = 1; i <= NF; i++) print $i}')
do
[ "$prev_arg" = '-p' ] && PORT="$arg"
prev_arg="$arg"
done
if [ -n "$PORT" ]
then
export PORT
services=$(chk_service $PORT)
if [ -n "$services" ]
then
fatal "Port $PORT is already in use.\n\t$services"
fi
fi
# Force AGPL
if [ -w "public" -a ! -e "public/source.tar.gz" ]
then
@ -161,16 +124,13 @@ application, run:
fi
# Check if redis is running
if [ "$single_process_mode" = "false" ]
if [ -n "$redis_url" ]
then
if [ -n "$redis_url" ]
then
redis_param="url: '$redis_url'"
fi
if [ "$(bin/bundle exec ruby -e "require 'redis'; puts Redis.new($redis_param).ping" 2> /dev/null | grep -vE "is not writable|as your home directory temporarily" )" != "PONG" ]
then
fatal "Can't connect to redis. Please check if it's running and if environment.redis is configured correctly in $CONFIG_FILE."
fi
redis_param="url: '$redis_url'"
fi
if [ "$(bin/bundle exec ruby -e "require 'redis'; puts Redis.new($redis_param).ping" 2> /dev/null | grep -vE "is not writable|as your home directory temporarily" )" != "PONG" ]
then
fatal "Can't connect to redis. Please check if it's running and if environment.redis is configured correctly in $CONFIG_FILE."
fi
# Check for old curl versions (see https://github.com/diaspora/diaspora/issues/4202)
@ -201,22 +161,5 @@ if [ -n "${ldconfig}" ]; then
fi
# Start Diaspora
printf "Starting Diaspora in $RAILS_ENV mode "
if [ -n "$PORT" ]
then
printf "on port $PORT "
fi
if [ "$embed_sidekiq_worker" = "true" ]
then
echo "with a Sidekiq worker embedded into Unicorn."
workers=0
elif [ "$single_process_mode" = "true" ]
then
echo "with job processing inside the request cycle."
workers=0
else
echo "with $workers Sidekiq worker(s)."
fi
echo ""
printf "Starting Diaspora in $RAILS_ENV mode with $workers Sidekiq worker(s)."
exec bin/bundle exec loader_eye --stop_all -c config/eye.rb

@ -25,14 +25,9 @@ describe Workers::ExportUser do
context "concurrency" do
before do
AppConfig.environment.single_process_mode = false
AppConfig.settings.archive_jobs_concurrency = 1
end
after :all do
AppConfig.environment.single_process_mode = true
end
let(:pid) { "#{Socket.gethostname}:#{Process.pid}:#{SecureRandom.hex(6)}" }
it "schedules a job for later when already another parallel export job is running" do
@ -76,14 +71,5 @@ describe Workers::ExportUser do
Workers::ExportUser.new.perform(alice.id)
end
it "runs the export when diaspora is in single process mode" do
AppConfig.environment.single_process_mode = true
expect(Sidekiq::Workers).not_to receive(:new)
expect(Workers::ExportUser).not_to receive(:perform_in).with(kind_of(Integer), alice.id)
expect(alice).to receive(:perform_export!)
Workers::ExportUser.new.perform(alice.id)
end
end
end