Slow Python Site, Available Entropy Stats

Infinite Glitch is running insanely slowly, with pages taking 20 seconds to load!

I went back to some early git commits to see if I could track down the source of the bottleneck.

Chris said it might be a corrupted drive or lack of available entropy.

I had to remind myself that while entropy in physics means the number of ways something can be arranged, in this context it refers to the kernel's pool of randomness, which /dev/random draws on and which can run low on a headless VM, blocking processes that need random bytes.
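On Linux you can check the pool directly by reading /proc/sys/kernel/random/entropy_avail, which reports the kernel's estimate of available entropy bits. A small sketch (Linux-only; returns None elsewhere):

```python
from pathlib import Path

def entropy_avail():
    """Return the kernel's available entropy estimate in bits,
    or None if /proc isn't available (i.e. not on Linux)."""
    p = Path("/proc/sys/kernel/random/entropy_avail")
    if not p.exists():
        return None
    return int(p.read_text().strip())
```

Values persistently near zero would point at entropy starvation.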

Somebody on SO had a recommendation for how to measure the entropy of an object in Python.
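I haven't reproduced the SO answer here, but the usual approach is Shannon entropy over symbol frequencies; a minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy of a string or bytes, in bits per symbol."""
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values())
```

Repetitive data scores near zero; uniformly random bytes score close to 8 bits per byte.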

On a Linode thread, someone had some commands to check the server's memory, load, disk usage, and write speed:

free -m

htop

w

df -h

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

That last dd command doesn't copy a disk; it writes a test file full of zeros (bs=64k × count=16k = 1 GiB) and, because of conv=fdatasync, forces the data to disk before reporting, so the rate it prints is a rough sequential-write benchmark.

Let’s see if the drive is dying:

sudo apt install smartmontools

sudo su

lsblk

NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda    8:0    0 49.5G  0 disk /
sdb    8:16   0  516M  0 disk [SWAP]
# smartctl -a /dev/sda
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-5.1.17-x86_64-linode128] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               QEMU
Product:              QEMU HARDDISK
Revision:             2.5+
User Capacity:        53,150,220,288 bytes [53.1 GB]
Logical block size:   512 bytes
LU is thin provisioned, LBPRZ=0
Device type:          disk
Local Time is:        Tue Apr 21 16:11:56 2020 EDT
SMART support is:     Unavailable - device lacks SMART capability.

The Linode support folks offered a couple of additional commands to look into the issue. The first, iostat 1 10, reports disk throughput and CPU load once per second for ten seconds:

iostat 1 10
              disk0               disk2               disk3       cpu    load average
    KB/t  tps  MB/s     KB/t  tps  MB/s     KB/t  tps  MB/s  us sy id   1m   5m   15m
    6.11  215  1.28   257.65    0  0.10    61.52    0  0.00   3  4 93  5.91 3.82 2.93
    4.77 3468 16.17     0.00    0  0.00     0.00    0  0.00   5 15 80  5.91 3.82 2.93
    4.74 1353  6.27     0.00    0  0.00     0.00    0  0.00   6 15 79  5.91 3.82 2.93
    4.60 1016  4.56     0.00    0  0.00     0.00    0  0.00   7 16 77  5.60 3.79 2.93
    4.43  897  3.89     0.00    0  0.00     0.00    0  0.00  14 21 65  5.60 3.79 2.93
    4.05  704  2.79     0.00    0  0.00     0.00    0  0.00   8 18 74  5.60 3.79 2.93
    4.24 3054 12.65     0.00    0  0.00     0.00    0  0.00   9 18 73  5.60 3.79 2.93
    4.19 1473  6.03     0.00    0  0.00     0.00    0  0.00  10 19 71  5.60 3.79 2.93
    4.08  684  2.73     0.00    0  0.00     0.00    0  0.00   8 18 74  5.15 3.73 2.91
    4.21  788  3.24     0.00    0  0.00     0.00    0  0.00   9 18 73  5.15 3.73 2.91

And this one, which samples process states every two seconds for a minute, looking for anything stuck in an unusual state:

for x in `seq 1 1 30`; do ps -eo state,pid,comm | grep "^[I<>AELVX]"; echo "-"; sleep 2; done
$ top -n 1 | head -15
Processes: 575 total, 3 running, 3 stuck, 569 sleeping, 3166 threads 
2020/04/22 11:07:45
Load Avg: 1.94, 2.29, 2.29 
CPU usage: 5.37% user, 14.74% sys, 79.88% idle 
SharedLibs: 363M resident, 73M data, 112M linkedit.
MemRegions: 316863 total, 11G resident, 244M private, 5418M shared.
PhysMem: 31G used (3743M wired), 573M unused.
VM: 3535G vsize, 1377M framework vsize, 64982(0) swapins, 116359(0) swapouts.
Networks: packets: 231632817/96G in, 327107123/172G out.
Disks: 74564630/449G read, 14517642/112G written.

PID    COMMAND %CPU TIME     #TH #WQ #PORTS MEM   PURG CMPRS PGRP  PPID STATE    BOOSTS %CPU_ME %CPU_OTHRS UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH CSW PAGEINS IDLEW POWER INSTRS CYCLES USER       #MREGS RPRVT VPRVT VSIZE KPRVT KSHRD
92385  head    0.0  00:00.00 1   0   10+    304K+ 0B   0B    92384 6197 sleeping *0[1+] 0.00000 0.00000    501 377+   83+ 23+     11+     110+   51+     11+ 0       0     0.0   0      0      mikekilmer N/A    N/A   N/A   N/A   N/A   N/A  
Processes: 575 total, 4 running, 3 stuck, 568 sleeping, 3134 threads 
2020/04/22 11:07:46

In the end, the solution was having Linode migrate the site to a new instance. Weird.
