We have the solution, with our network monitoring and analytics app available on the App Store:
https://apps.apple.com/gb/app/my-broadband-stats/id1551544602
After a quick setup period, our app will collect statistics from your router and provide you with a set of rich, intuitive graphs that tell you everything you need to know about what’s using your home network.
We are currently offering a free trial of our already-value-for-money subscriptions, so you can see the full potential at no cost to yourself. So why not give it a go today?
This certainly used to be known as “Dog-Fooding” in more than one big IT setup. Still, I’m quite happy with how this site runs, and hopefully you will be too.
Although this site is specific to WordPress, a lot of the technology used is applicable to web hosting in general. I know how to squeeze the most out of your VPS to run Magento or just about anything else you like.
This site is hosted on an Amazon AWS t2.micro instance, so no-one can accuse me of over-specifying the hardware. If you haven’t used AWS before then you will qualify for the AWS free usage tier, and I’d be delighted to help you migrate over to it.
Enough talk, here are some detailed measurements of the performance of this site.
No problem, I’ll just sling up the excellent MRTG and find out, I thought.
Oh no, it’s not that easy. I have an FTTC service provided by TalkTalk. The VDSL modem/router is a “Super Router”, also known as the Huawei HG633. Running firmware v1.15t, it has neither SNMP nor telnet/SSH nor any other kind of CLI access. Bit of a dead end really.

Still, not to worry: I only use the HG633 to terminate the VDSL. It has an Ethernet uplink to an Apple AirPort Extreme that provides Wi-Fi for the house and a couple of gigabit-connected wired devices (thanks, TalkTalk, for providing me with an 80/20 Mbps WAN product and only 100 Mbps LAN side). Apple, however, have also removed SNMP capability from the AirPort range. *GRR*.

Now the obvious solution is to get a proper modem/router/access point, but these things are sent to challenge us. The HG633 has a tolerable web admin interface, which does expose some statistics, so we can surely yank those out with a bit of patience.
Turns out it is all JavaScript-based in the HG633, but no worries: the excellent PhantomJS to the rescue. Lurking on the home LAN is a Raspberry Pi Model 3, which proves to be more than up to the task of driving this headless JavaScript engine. After a little bit of tinkering I was able to put together a PhantomJS script which would log in to the router, navigate to the appropriate page, and then dump the DOM out. Judicious use of text parsing gets the required information out of the admin GUI, at which point it’s trivial to feed it to MRTG.
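The parsing step can be sketched roughly as follows. Note this is a minimal, hypothetical sketch: the element IDs and the sample HTML below are made up, as the real ones depend on whatever the router firmware emits; in real use the dump would come from running the PhantomJS script rather than a here-string. MRTG expects exactly four lines from a script target: bytes in, bytes out, uptime, and a target name.

```shell
# Hypothetical DOM dump -- in reality: dom_dump=$(phantomjs router.js)
dom_dump='<span id="ReceiveBytes">123456789</span>
<span id="SendBytes">987654</span>'

# Pull the two counters out of the HTML with sed (IDs are assumptions)
bytes_in=$(echo "$dom_dump"  | sed -n 's/.*id="ReceiveBytes">\([0-9]*\)<.*/\1/p')
bytes_out=$(echo "$dom_dump" | sed -n 's/.*id="SendBytes">\([0-9]*\)<.*/\1/p')

# Emit MRTG's four-line format: in, out, uptime, target name
printf '%s\n%s\n%s\n%s\n' "$bytes_in" "$bytes_out" "unknown" "HG633"
```

Fragile, yes, but it only has to survive until the next firmware update anyway.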
The results can be seen at http://mattfoster.noip.me/mrtg/
The code is ugly, doesn’t really cope with error conditions all that well, and is heavily dependent on some of the DOM structure in the router’s management page which will doubtless get screwed the next time TalkTalk pushes down a firmware update. Still perhaps the next firmware update will re-enable the CLI.
Where there is a will, there is a way, even if it is a slightly stupid one which certainly fails to deal with asynchronous requests properly, or even to work all the time.
I hesitate to even publish the code, but as it was an annoying enough problem to “solve”, the PhantomJS script is available as router.js.txt, and the horrible bash script called by MRTG as mrtg-router.sh.txt.
Since the Huawei HG633 was updated to firmware 1.18t the scripts broke (no surprise really, given the lack of an API and the reliance on HTML scraping). The updated JS script is now available as router-1.18t.js.txt.
Of course, we still have to keep an eye on our data transfer costs. There are two candidates for backing up our Linux server/VPS to S3 that I’ve seen and used in the past: s3cmd or s3fs.
S3FS certainly feels nice, and we can rsync to it in the normal way, but (and it is potentially a huge but – no pun intended) AWS S3 charges are not just for storage, but also for bandwidth transferred and, perhaps critically, for the number of requests made to the S3 API. I freely confess to having done zero measurement on the subject, but it just feels instinctive that a FUSE filesystem implementation is going to make far more API calls than s3cmd’s Python scripts, which call the API directly.
So, using rsync-like logic, you might consider doing something like:
cd /var/www/
s3cmd sync -r vhosts --delete-removed s3://$BUCKET/current/vhosts/
There is a small snag to this approach, however. s3cmd keeps the directory structure in memory to help it with the rsync logic. This is fine if you are on real tin, with memory to spare. But on a VPS, especially an OpenVZ-based one where there is no such thing as swap, this can be a real show-stopper for large directory structures, as the hundreds of MB of RAM required just are not available. Time for our old friend the OOM killer to rear its head?
Recursion of some form would be the elegant answer here. However, elegance is for those with time for it, and the following seems to work very effectively with minimal RAM consumption:
cd /var/www
for i in `find . -type d -links 2 | sort | sed -e 's/\.\///g'`
do
  s3cmd sync -r $i/ --delete-removed s3://$BUCKET/current/vhosts/$i/
done
The find command looks for directories whose link count is 2 – on a traditional Unix filesystem that means directories containing no subdirectories (just their own . entry and the one in their parent), i.e. the leaf nodes of the directory tree. Then we back them up, one by one.
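A tiny sandbox shows what that find incantation selects (the directory names here are invented for illustration, and the link-count trick assumes classic Unix directory semantics – ext4-style, not btrfs):

```shell
# Build a throwaway tree: site-a has a subdirectory, site-b does not
demo=$(mktemp -d)
mkdir -p "$demo/vhosts/site-a/htdocs" "$demo/vhosts/site-b"
cd "$demo"

# Only the leaf directories come back
find . -type d -links 2 | sort | sed -e 's/\.\///g'
# vhosts/site-a/htdocs
# vhosts/site-b
```

Note that vhosts/site-a itself is skipped (it has a subdirectory, so its link count is 3), which is exactly what we want: syncing each leaf covers every file once.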
Simples.
My own WP site here is protected with Google Authenticator, and there is no excuse for not doing the same on yours. Just grab the awesome WP Google Authenticator plugin and you will be good to go.
My favourite iOS app for this is the awesome Authy, but there are plenty out there.
But the world doesn’t run on WordPress; suppose you want to do it yourself on a LAMP site…
Grab a copy of the PHPGangsta class
$ga = new PHPGangsta_GoogleAuthenticator();
$secret = $ga->createSecret();
echo "Your OTP Secret is: " . $secret . "\n\nIt is probably a good idea to take a note of this";
echo "\nPlease scan in the QR code to set up your OTP ";
$qrCodeUrl = $ga->getQRCodeGoogleUrl('MyApp', $secret);
?>
<img src="<?php echo $qrCodeUrl; ?>">
<br>
<?php
$oneCode = $ga->getCode($secret);
$checkResult = $ga->verifyCode($secret, $oneCode, 2); // 2 = 2*30sec clock tolerance
if ($checkResult) {
    echo 'OK';
    // Use a prepared statement rather than concatenating values into the SQL string
    $stmt = mysqli_prepare($link, "UPDATE localusers SET GASecret=? WHERE id=?");
    mysqli_stmt_bind_param($stmt, 'si', $secret, $userRow['id']);
    mysqli_stmt_execute($stmt);
} else {
    echo 'FAILED';
}
if (!isset($userRow['GASecret']) || !isset($_REQUEST['e'])) { // Impossible to authenticate
    header('HTTP/1.1 401 Authentication Impossible');
    header('Content-Type: application/json; charset=UTF-8');
    die(json_encode(array('message' => 'ERROR', 'code' => 1337)));
} else { // Try to authenticate
    $ga = new PHPGangsta_GoogleAuthenticator();
    $checkResult = $ga->verifyCode($userRow['GASecret'], $_REQUEST['e'], 2); // 2 = 2*30sec clock tolerance
    if ($checkResult) {
        session_write_close();
        session_start();
        $_SESSION['OTP'] = 1;
        session_write_close();
        $result = "Authenticated";
        header('Content-Type: application/json');
        die(json_encode($result));
    } else {
        header('HTTP/1.1 401 Authentication Failed');
        header('Content-Type: application/json; charset=UTF-8');
        die(json_encode(array('message' => 'ERROR', 'code' => 1337)));
    }
}
Obviously these are just snippets, which will never actually run for you, but you get the general idea.
It is so easy, it is just rude not to.
It also represents really good value for money to my mind, and what better way to learn about it than with the free usage tier (if you stay within the fairly generous limits it truly is free). Since the introduction of the t2.micro node and general-purpose SSD storage (replacing the t1.micro, which was rather memory-cramped, and our old friend spinning rust), it is a serious piece of virtual hardware for a rather special price.
There is, however, no such thing as a one-size-fits-all answer. Perhaps you need a UK IP address. Perhaps you want a better pricing plan on TB of data in and out from your VPS. Perhaps you don’t need all the fancy infrastructure capabilities, but just want a few Linux boxen “in the cloud”. If so, you could do a lot worse than to look at linode.com. I first had a shell on a linode many many many years ago (it still works), and it seems to fit into the “it just works” bucket. Good price point (especially if data transfer is a worry for you), fast NIC speeds (getting over 100Mbps is challenging at this price level), the ability to deploy images, a fabulous reporting/monitoring engine – Longview. And an API. Nobody should be touching anything that doesn’t have an API through which you can do everything you need.
I do not work, and have never worked, for either AWS or Linode, but they have both been wonderful providers to me and my clients time and time again.
This means that the input gets passed around through JS, AJAX, PHP and goodness only knows what else before it turns up in the right place.
How do we make sure it’s safe to add to a SQL query?
Of course we can use PDO, but how about the general case?
$Words = str_replace("\xA0", " ", mysqli_real_escape_string($link, html_entity_decode(strip_tags(preg_replace('!\s+!', ' ', trim($Words))))));
$pieces = explode(" ", strip_tags($Words));
Something just says this is plain wrong, but it’s working for me.
In this particular use case I’m trying to break up a user provided “sentence” into a set of words, which I then do stuff with.
The non-breaking space (\xA0) is particularly difficult to parse here when things get pasted in.
I’m sure the above approach is wrong, would anyone like to tell me how to do it better?
Still, it does have a very useful role to play, even if some of the things it does just seem plain strange. Yes, EasyApache does give enormous flexibility, but so do the vendor-provided packages.
Some days the only way to fix things is by SSH’ing into the server, and you have to be really careful to make sure that you don’t change something at the command line that WHM has its claws into.
suPHP seems to be the default handler (I can kind of understand why for multi-tenant hosting setups, but perhaps you should have a real sysadmin hired in that scenario?). It has a charming habit of doing the unexpected; today’s head-banging surprise came from wondering why php.ini settings were not getting applied.
After lots of grepping for ini_set statements, we eventually found a suPHP_ConfigPath directive in .htaccess.
*sigh*
.htaccess has a lot to answer for, and if you are looking for real web performance you should _NEVER_ use .htaccess – put the configuration in the Apache configuration file where it belongs. The additional cycles Apache has to spend checking for the presence of .htaccess and parsing it if it is there will hurt you in the long run.
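As a sketch of what that looks like (the paths, ServerName and suPHP config location here are hypothetical; suPHP_ConfigPath is the mod_suphp directive for pointing at an alternative php.ini directory):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/vhosts/example.com

    # What used to live in .htaccess belongs here instead
    suPHP_ConfigPath /etc/php-custom/example.com

    <Directory /var/www/vhosts/example.com>
        # Stop Apache looking for (and parsing) .htaccess on every request
        AllowOverride None
    </Directory>
</VirtualHost>
```

With AllowOverride None in place, Apache can skip the per-request .htaccess lookups entirely, and your PHP configuration lives somewhere a sysadmin will actually find it.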
Allowing your “webmasters” to specify their own php.ini through .htaccess is just plain wrong.
Rant ends.
Well, there are some things that can come back to bite you when you copy’n’paste stuff that you find with Google.
One client was very patient with me today whilst some serious head scratching went on as we tried to work out why we had broken the shopping cart on one vhost but not another on the same server, with identical versions of OpenCart running in the background. I was all ready to give up and put a “don’t cache this site/vhost” entry into the VCL, when something caught my eye.
# Cache the following file extensions
if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)") { unset req.http.cookie; }
Read that regexp carefully. It doesn’t do quite what you expect: without a trailing $ anchor it matches those extensions anywhere in the URL, so a dynamic request such as /index.php?image=logo.png&size=200 also gets its cookies stripped – which is exactly how a shopping cart breaks.
if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)$") { unset req.http.cookie; }
Works much more consistently, and more to the point as intended.
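You can see the difference with plain grep (the sample URLs below are made up; the middle one is the shape of dynamic request that was losing its cookies):

```shell
# Two genuinely static URLs, and one dynamic PHP request that merely
# *mentions* an image filename in its query string.
urls="/catalog/stylesheet.css
/index.php?route=product&image=logo.png&size=200
/image/cache/logo.jpg"

# Unanchored: the dynamic request matches as well
printf '%s\n' "$urls" | grep -cE '\.(css|js|png|gif|jp(e)?g|swf|ico)'    # 3

# Anchored with $: only the real static files match
printf '%s\n' "$urls" | grep -cE '\.(css|js|png|gif|jp(e)?g|swf|ico)$'   # 2
```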
Pay attention to the detail, and remember there is always a reason for strange behaviour; the code only follows the rules we give it.