Optimizing WordPress Performance with Amazon EFS

Many organizations use content management systems (CMS) like WordPress with a single-node installation, but could benefit from a multi-node installation, as it is a best practice that provides benefits in terms of performance and availability. The reliability pillar of the AWS Well-Architected Framework recommends the following design principle: "Scale horizontally to increase aggregate system availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall system. Distribute requests across multiple, smaller resources to ensure that they don't share a common point of failure." This blog post discusses how Amazon Elastic File System (Amazon EFS) can be used as a shared content store for highly available WordPress deployments, and presents optimization tips to improve your site's performance.

One approach for running a multi-node WordPress site is to store files in a central location and download this data during the bootstrap process. While this can work, this option makes it more difficult to ensure that content stays synchronized as your website evolves. Using a shared file system like Amazon EFS allows multiple nodes to have access to the WordPress files at the same time. This can significantly simplify the processes of scaling horizontally and updating your website.
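For example, a minimal sketch of attaching the shared file system on a web node might look like this (the file system ID and paths below are placeholders, not values from an actual deployment):

    # Sketch: mount a shared EFS file system at the WordPress document root.
    # fs-12345678 and the paths are placeholders.
    sudo yum install -y amazon-efs-utils                  # provides the EFS mount helper
    sudo mkdir -p /var/www/html/wordpress
    sudo mount -t efs -o tls fs-12345678:/ /var/www/html/wordpress

    # Optionally make the mount persistent across reboots:
    echo "fs-12345678:/ /var/www/html/wordpress efs _netdev,tls 0 0" | sudo tee -a /etc/fstab

Every node in the fleet mounts the same file system, so a plugin installed or a file uploaded on one node is immediately visible to the others.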

Understanding page load time

Amazon EFS provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is a regional service, storing data within and across multiple Availability Zones for high availability and durability.

As with any networked file system, there is an overhead associated with the network communication between the client and the server. This overhead is proportionally larger when operating on small files from single-threaded applications. Multi-threaded I/O and I/O on larger files can often be pipelined, allowing network latencies to be amortized over a larger number of operations.

Say, for example, that a PHP website needs access to 100 small files to generate a home page; these could be PHP files, includes, modules, etc. If you introduce a modest, low single-digit millisecond latency when loading pages into the PHP parser, then your users will experience an additional delay of a few hundred milliseconds when accessing your website (100 files x a few milliseconds of delay each). This is important, as user tolerance for a loading webpage is not very high.

This diagram shows the different steps that must be taken for a page to load. Each step introduces additional latency. We are concentrating on how to optimize your WordPress website to run on Amazon EFS and serve the 'first byte' as soon as possible.


Latency impact

PHP is an interpreted language. The interpreter must read, interpret, and compile the code of your application for each request made to it. For a simple <?php echo "Hello world" ?> the interpreter needs access to a single file, but if you are running a CMS like WordPress it may need to read hundreds of files before it can generate a page. To give you an idea, the PHP interpreter reads 227 files in sequence before generating the "Welcome" page from a newly installed WordPress.

To demonstrate the performance a web server experiences when retrieving files directly in a serial fashion over the network, I created a fresh, out-of-the-box "Welcome" page from a newly installed WordPress (v5.4) website. The website ran on a t2.medium Amazon EC2 instance. The WordPress directory was stored in an Amazon EFS file system I created using all the defaults (General Purpose performance mode and Bursting throughput mode). In addition, the directory was mounted using the Amazon EFS mount helper, which by default uses the recommended mount options. Once the setup was complete, I ran several tests. The first test loads the default "Welcome" page, and tests #2 to #5 load static files of various sizes.
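For reference, a file system with those default modes can be created with the AWS CLI along the following lines (the IDs below are placeholders; the mount target must reference the file system ID returned by the first command):

    # Sketch: create an EFS file system with the default General Purpose
    # performance mode and Bursting throughput mode, then add a mount target
    # in the subnet used by the web server. IDs below are placeholders.
    aws efs create-file-system \
        --performance-mode generalPurpose \
        --throughput-mode bursting \
        --tags Key=Name,Value=wordpress-content

    aws efs create-mount-target \
        --file-system-id fs-12345678 \
        --subnet-id subnet-0123456789abcdef0 \
        --security-groups sg-0123456789abcdef0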

The time-to-first-byte (TTFB) metric is useful for measuring the results of each test. When someone opens a website, the browser asks the server for data; this is known as a 'GET' request. TTFB is the time it takes for the browser to receive the first byte from the web server. Ideally, we would like the TTFB to be as small as possible. I ran each test on the same Amazon EC2 instance that was hosting the WordPress install so that there wouldn't be any network latency interfering with the results.

I used the Linux curl command to run these tests, capturing the TTFB for different files under different conditions. At the end, I recorded 250 samples of 'time_starttransfer' for each test.

This is a sample of the bash script:

    #!/bin/bash
    for i in {1..250}
    do
        curl -o /dev/null \
        -s \
        -w "%{time_starttransfer}\n" \
        http://127.0.0.1/wordpress/ >> /tmp/wordpress-efs-ttfb.txt
    done

These are the test results:

Test | GET Operation | Bytes received | Files read | Average TTFB
1    | wordpress/    | 3 KB           | 227        | 759 ms
2    | hello.txt     | 12 B           | 1          | 3 ms
3    | small-file    | 1 MB           | 1          | 5 ms
4    | medium-file   | 10 MB          | 1          | 5 ms
5    | big-file      | 100 MB         | 1          | 6 ms

Latency impact test results

The latency is relatively consistent regardless of file size for the static files. The dynamically generated page takes longer to load. This is because PHP is fetching files sequentially, accumulating latency before the page is served to the browser.

OPcache to the rescue

Zend OPcache improves PHP performance by storing precompiled script bytecode in shared memory, removing the need for PHP to load and parse scripts on each request. This extension is bundled with PHP 5.5.0 and later, and is available in PECL for PHP versions 5.2, 5.3, and 5.4.
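Before tuning it, you can quickly confirm that the extension is actually loaded for your PHP build:

    # Confirm that Zend OPcache is loaded.
    php -v                      # the version banner mentions "with Zend OPcache" when it is enabled
    php -m | grep -i opcache    # lists "Zend OPcache" among the loaded modules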

I ran the same "GET wordpress/" test, but this time with OPcache enabled. I did so to determine how much latency can be reduced by not having to read from disk and not having to compile the code with every request.

You can use the setting opcache.revalidate_freq with opcache.validate_timestamps=1 within the OPcache configuration to determine how often to expire PHP bytecode, forcing a new round trip to Amazon EFS. In this example, I am setting a revalidation frequency of 15 minutes (900 seconds). Remember that with OPcache, as with any caching system, you are exchanging speed for instant visibility of the files available in the shared file system.

    ; Enable Zend OPcache extension module
    zend_extension=opcache
    ; Determines if Zend OPcache is enabled
    opcache.enable=1
    ; The OPcache shared memory storage size.
    opcache.memory_consumption=128
    ; The amount of memory for interned strings in Mbytes.
    opcache.interned_strings_buffer=8
    ; The maximum number of keys (scripts) in the OPcache hash table.
    opcache.max_accelerated_files=4000
    ; The location of the OPcache blacklist file (wildcards allowed).
    opcache.blacklist_filename=/etc/php.d/opcache*.blacklist
    ; When disabled, you must reset the OPcache manually or restart the
    ; webserver for changes to the filesystem to take effect.
    opcache.validate_timestamps=1
    ; How often (in seconds) to check file timestamps for changes to the shared
    ; memory storage allocation. ("1" means validate once per second, but only
    ; once per request. "0" means always validate)
    opcache.revalidate_freq=900
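If you deploy a change and do not want to wait out the 15-minute revalidation window, one simple (if blunt) option is to restart the PHP runtime so that the shared-memory cache is rebuilt; the service names below depend on your distribution and setup:

    # Example: drop the OPcache shared memory after a deployment.
    sudo systemctl restart httpd      # mod_php: the cache lives inside the Apache processes
    # sudo systemctl restart php-fpm  # PHP-FPM: the cache lives inside the FPM workers instead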

These are the test results:

Test | GET Operation                | Bytes received | Files read | Average TTFB
1    | wordpress/ (without OPcache) | 3 KB           | 227        | 759 ms
2    | wordpress/ (with OPcache)    | 3 KB           | 227        | 22 ms

Improvement: 35X

This is the phpinfo() function showing OPcache metrics. Note how it has now cached into memory all 227 scripts that WordPress requires to display the homepage.
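If you prefer raw numbers over the phpinfo() page, a small throwaway script served through the web server exposes the same counters (the document root path below is just an example, and note that the PHP CLI keeps its own separate cache, so the query has to go through the web SAPI):

    # Sketch: dump OPcache statistics through the web SAPI. Paths are examples.
    printf '%s\n' \
        '<?php' \
        'header("Content-Type: application/json");' \
        'echo json_encode(opcache_get_status(false));' \
        | sudo tee /var/www/html/wordpress/opcache-status.php > /dev/null
    curl -s http://127.0.0.1/wordpress/opcache-status.php
    sudo rm /var/www/html/wordpress/opcache-status.php    # remove it when you are done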


There is a 35X improvement when using OPCache with files stored on Amazon EFS (from 759 ms to 22 ms).


Static content caching

PHP renders dynamic pages while the web server facilitates access to them via HTTP(S). A web server also provides access to static files like images, CSS, JavaScript, etc. These files can be served directly from Amazon EFS, and the user will experience a low single-digit millisecond latency. For websites that have hundreds (or even more) of static files that are loaded sequentially, the sum of these latencies can increase page load times.

A solution to this problem is to cache static files somewhere, such as a Content Delivery Network (CDN), like Amazon CloudFront, or local storage. If you are using Apache, you can look at the 'mod_cache_disk' module, which implements a disk-based storage manager for mod_cache. The way it works is that the headers and bodies of cached responses are stored separately in a location that you specify. This configuration avoids a network round trip to the shared file system when requests are served from the cache.
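On a typical Apache 2.4 installation, you can first check that the relevant modules are loaded (module packaging varies by distribution):

    # List the cache- and expiry-related modules loaded by the running Apache build.
    sudo httpd -M | grep -Ei 'cache|expires'    # on Debian/Ubuntu use: apache2ctl -M
    # Look for cache_module, cache_disk_module, and expires_module in the output.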

For demonstration purposes, I've prepared a configuration file that configures Apache to cache non-PHP files and revalidate them 15 minutes after the first access occurred. The files are stored in /var/cache/httpd/proxy; this location is using local disks. When a new request comes in, Apache will first check if the file has been cached. If the file hasn't expired, Apache retrieves it from this location; otherwise it fetches it from the wwwroot folder that sits on top of Amazon EFS. The Apache cache can easily be flushed or kept within a size limit by using the htcacheclean utility (an example follows the configuration below).

    # Directory on the disk to contain cached files
    CacheRoot "/var/cache/httpd/proxy"
    # Cache all
    CacheEnable disk "/"
    # Enable cache and set 15-minute caching as the default
    ExpiresActive On
    ExpiresDefault "access plus 15 minutes"
    # Force no caching for PHP files
    <FilesMatch "\.(php)$">
        ExpiresActive Off
    </FilesMatch>
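As an example of keeping the cache under control, htcacheclean can run as a daemon against the same CacheRoot (the interval and size limit below are arbitrary illustrative values):

    # Run htcacheclean in daemon mode: wake up every 60 minutes, be nice to other
    # processes, remove empty directories, and keep the cache under 512 MB.
    sudo htcacheclean -d 60 -n -t -p /var/cache/httpd/proxy -l 512M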

Once the server was configured, I performed the 'hello.txt' test to compare results every 15 minutes.

Test | GET Operation                           | Bytes received | Files read | Average TTFB
1    | GET hello.txt (without mod_cache_disk)  | 12 B           | 1          | 3 ms
2    | GET hello.txt (with mod_cache_disk)     | 12 B           | 1          | 0.6 ms

Improvement: 5X

After caching is enabled, it makes no difference whether the file lives on local disk or Amazon EFS, since the file is served from local disk after the first request.

After caching is enabled, there is a 5x improvement when serving static assets stored on Amazon EFS.
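If you want to confirm that a given response really came from the disk cache rather than Amazon EFS, you can additionally set 'CacheHeader on' in the cache configuration and inspect the X-Cache response header (the URL below assumes the same test file):

    # With "CacheHeader on" added to the cache configuration, Apache reports
    # whether each response was a cache HIT or MISS in the X-Cache header.
    curl -s -o /dev/null -D - http://127.0.0.1/wordpress/hello.txt | grep -i '^x-cache'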

One last note: these settings may be enough for many WordPress installations, but in some cases you have to optimize further. For example, some plugins can be configured to write logs. Ideally, you would want these log files to be written to local disk in order to avoid network round trips to the shared file system, potentially every time a line is added to the log.
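As a rough illustration (the plugin log directory and paths below are made up for the example), one way to do this is to keep the log directory on local storage and point the shared tree at it with a symlink:

    # Sketch: move a plugin log directory off the EFS-backed tree onto local
    # instance storage, then symlink it back. All paths here are examples.
    sudo mkdir -p /var/log/wordpress-plugins
    sudo chown apache:apache /var/log/wordpress-plugins
    cd /var/www/html/wordpress/wp-content/uploads
    [ -d logs ] && sudo mv logs logs.bak      # preserve any existing log files first
    sudo ln -s /var/log/wordpress-plugins logs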

Conclusion

In this blog post, I discussed how the AWS Well-Architected Framework recommends scaling out workloads horizontally. Furthermore, I covered how Amazon EFS is a good choice as the network shared file system for WordPress workloads. This is because Amazon EFS allows multiple nodes to have access to the WordPress files at the same time, simplifying scaling and deployment. I hope that my demonstration of using caching techniques to optimize the performance of static files by 5x and dynamic files by 35x proves valuable to anyone reading.

Thanks for reading; please leave any comments or questions you may have in the comments section.

Additional resources

  • Tutorial: Hosting a WordPress blog with Amazon Linux
  • WordPress: Best practices on AWS
  • How to accelerate your WordPress site with Amazon CloudFront


Source: https://aws.amazon.com/blogs/storage/optimizing-wordpress-performance-with-amazon-efs/
