Saving and Replaying from file
You can save requests to a file and replay them later. While replaying, Gor preserves the original time differences between requests. If you apply a percentage-based limiter, the timing between requests is reduced or increased proportionally: this approach opens up possibilities like load testing, see below.
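For example, a minimal capture-and-replay round trip might look like this (the file name and staging host are placeholders):

```
# Capture traffic from port 80 and save it to a file
gor --input-raw :80 --output-file requests.gor

# Later, replay the saved requests against another host
gor --input-file requests.gor --output-http "http://staging.example.com"
```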
By default Gor writes files in chunks, and each flushed chunk goes to a different path with an incremented index. This makes parallel file processing easy. This behavior is controlled by the --output-file-append option, which decides whether flushed chunks are appended to an existing file; the default is false. To write everything to a single file instead, add the --output-file-append option:
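A sketch of both modes (file names are placeholders):

```
# Default: each flushed chunk is written to a new, indexed file
gor --input-raw :80 --output-file requests.gor

# Append all chunks to a single file instead
gor --input-raw :80 --output-file requests.gor --output-file-append
```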
If you run Gor multiple times and it finds existing files, it continues from the last known index.
Chunk size
You can set chunk limits using the --output-file-size-limit and --output-file-queue-limit options, which control the size of each chunk and the length of the chunk queue, respectively. The default values are 32mb and 256. The suffixes "k" (KB), "m" (MB), and "g" (GB) can be used with --output-file-size-limit. If you want only a size constraint, set --output-file-queue-limit to 0, and vice versa.
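For example, assuming you want larger chunks with no queue-length constraint (the values here are illustrative):

```
# Rotate chunks at 64 MB; a queue limit of 0 disables the length constraint
gor --input-raw :80 --output-file requests.gor \
    --output-file-size-limit 64m --output-file-queue-limit 0
```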
Using date variables in file names
For example, you can tell Gor to create a new file each hour: --output-file /mnt/logs/requests-%Y-%m-%d-%H.log
This will create a new file for each hour: requests-2016-06-01-12.log, requests-2016-06-01-13.log, ...
The time format is used as part of the file name. The following characters are replaced with actual values when the file is created:
- %Y: year including the century (at least 4 digits)
- %m: month of the year (01..12)
- %d: day of the month (01..31)
- %H: hour of the day, 24-hour clock (00..23)
- %M: minute of the hour (00..59)
- %S: second of the minute (00..60)
The default format is %Y%m%d%H, which creates one file per hour.
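A full capture command using the hourly pattern from above (the path is a placeholder):

```
# Start a new log file every hour
gor --input-raw :80 --output-file /mnt/logs/requests-%Y-%m-%d-%H.log
```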
GZIP compression
To read or write GZIP-compressed files, ensure that the file extension ends with ".gz": --output-file log.gz
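Both output and input honor the extension; for example (file names and host are placeholders):

```
# Write compressed output; the .gz extension enables GZIP
gor --input-raw :80 --output-file requests.gor.gz

# Reading a compressed file works the same way
gor --input-file requests.gor.gz --output-http "http://staging.example.com"
```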
Replaying from multiple files
--input-file accepts a file pattern, for example: --input-file logs-2016-05-*
GoReplay is smart enough to keep the original order of requests. This is achieved by reading all files in parallel and sorting requests across files by timestamp. It does not read whole files into memory, but instead reads them in a streaming manner, on demand.
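For example (the pattern and host are placeholders):

```
# Replay all May 2016 logs, preserving the original request order
gor --input-file "logs-2016-05-*" --output-http "http://staging.example.com"
```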
Buffered file output
Gor keeps a memory buffer when writing to a file and continuously flushes changes to it. A flush to file happens when the buffer fills up, on a forced flush every 1 second, or when Gor is closed. You can change the interval using the --output-file-flush-interval option. In most cases it should not be touched.
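If you do need to change it, a sketch (the 5s value is illustrative; the flag takes a duration):

```
# Force a flush every 5 seconds instead of the default 1 second
gor --input-raw :80 --output-file requests.gor --output-file-flush-interval 5s
```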
File format
HTTP requests are stored as-is, in plain text: headers and bodies. Requests are separated by a \n🐵🙈🙉\n
line (such a sequence is used for uniqueness and fun). Before each request goes a single line with meta information containing the payload type (1 - request, 2 - response, 3 - replayed response), a unique request ID (a request and its response share the same ID), and the timestamp when the request was made. An example of 2 requests:
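A minimal illustrative example (the IDs, host, and timestamps below are made up):

```
1 a2cc4a6e13b029c4cebbdfc4a63e0c68 1439818823587396305
GET /users HTTP/1.1\r
Host: example.com\r
\r

🐵🙈🙉

1 8e091765ae902fef8a2b7d9dd960e9d5 1439818824452413406
GET /posts HTTP/1.1\r
Host: example.com\r
\r

🐵🙈🙉
```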
Note that the \r and \n symbols are technically invisible and indicate new lines; they are made visible in the example just to show how it looks at the byte level.
Making the format text-friendly allows you to write simple parsers and use console tools like grep for analysis. You can even edit the files manually, but make sure your editor does not change the line endings.
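For instance, a rough way to count captured requests (assuming meta lines for requests start with "1 "; the file name is a placeholder):

```
# Count meta lines whose payload type is 1 (request)
grep -c "^1 " requests.gor
```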
Performance testing
Currently, this functionality is supported only by input-file and only when using the percentage-based limiter. Unlike the default limiter, the input-file limiter does not drop requests; instead it slows down or speeds up request emitting. Note that the limiter is applied to the input:
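For example, replaying at 10x the original speed (the host is a placeholder):

```
# A limiter above 100% speeds up replay; below 100% slows it down
gor --input-file "requests.gor|1000%" --output-http "http://staging.example.com"
```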
Use --stats --output-http-stats to see latency stats.
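Combined with the limiter, a hypothetical load-test invocation might look like:

```
# Replay at double speed and report output latency statistics
gor --input-file "requests.gor|200%" --output-http "http://staging.example.com" \
    --stats --output-http-stats
```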
Looping files for replaying indefinitely
You can loop the same set of files, so when the last file has replayed all its requests, Gor will not stop, but will start from the first one again. With only a small set of requests you can still do extensive performance testing. Pass --input-file-loop to make it work.
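For example (file name and host are placeholders):

```
# Replay the same file in an endless loop at 10x speed
gor --input-file "requests.gor|1000%" --input-file-loop \
    --output-http "http://staging.example.com"
```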
You may also read about [[Capturing and replaying traffic]] and [[Rate limiting]].