Use a page caching plugin that writes HTML files to disk. I don’t do a lot with WordPress any more, but my preferred one was WP Super Cache. Then configure Nginx to serve those cached files directly from disk whenever they exist. That way most page loads never touch PHP, and you effectively get the same performance as a static site.
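A minimal sketch of the Nginx side, assuming WP Super Cache’s default cache directory and a PHP-FPM backend (the domain, root path, socket path, and cookie names are assumptions to check against your install; I’ve also left out the HTTPS/gzip cache variants for brevity):

    # Sketch: serve WP Super Cache's static HTML when it exists,
    # and only fall back to PHP-FPM when it doesn't.
    server {
        listen 80;
        server_name example.com;        # hypothetical domain
        root /var/www/wordpress;        # assumed install path

        set $cache_uri $request_uri;

        # Never serve cached pages for POSTs or logged-in users
        if ($request_method = POST) {
            set $cache_uri 'nocache';
        }
        if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
            set $cache_uri 'nocache';
        }

        location / {
            # Try the Super Cache file first, then fall through to WordPress
            try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
                      $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed socket path
        }
    }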
See how you go with just that, before making any other changes. You shouldn’t need FastCGI caching: if you can get most page loads hitting static HTML files, you likely won’t need any other optimizations.
One issue you’ll hit is highly dynamic content that’s generated on the server, since that can’t be served from a static cache. You’ll need JavaScript to load any dynamic bits on the client instead. Normal article editing is fine, as the caching plugin clears the related cache files when you publish.
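For example, a fully cached page can still show per-user or frequently changing content by fetching it client-side after load (the endpoint and element names here are hypothetical):

    <div id="latest-comments"></div>
    <script>
      // Sketch: fetch the dynamic fragment from an uncached endpoint
      // once the cached HTML has loaded. "/api/latest-comments" is made up.
      fetch('/api/latest-comments')
        .then(res => res.text())
        .then(html => {
          document.getElementById('latest-comments').innerHTML = html;
        });
    </script>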
For the server, make sure it’s in the region where the majority of your users are. For 200k monthly hits, I doubt you’d need a machine as powerful as the Hetzner one you mentioned: that averages out to less than 0.1 requests per second, and Nginx serving static files from disk will handle orders of magnitude more. What are you using currently?


In the context of Debian, “stable” means it doesn’t change often: within a particular stable release, packages receive security and bug fixes but no major version changes.
Unstable has major changes all the time, hence the name.
I think testing is a good middle ground. Packages migrate from unstable to testing after roughly ten days (less for urgent uploads), provided no release-critical bugs have been found in the meantime.
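If you want to try it, pointing a machine at testing is just a sources.list change. A sketch using the standard mirror (be aware that security updates reach testing more slowly than stable):

    # /etc/apt/sources.list for tracking testing
    # The suite name "testing" follows the rolling branch; using a codename
    # instead would ride that release into the next stable and stay there.
    deb http://deb.debian.org/debian testing main
    deb http://deb.debian.org/debian-security testing-security main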