Matt Foster – Security Performance Architecture
https://www.securityperformancearchitecture.co.uk

BT SmartHub 2 Monitoring and Statistics
Mon, 22 Mar 2021 13:02:40 +0000
https://www.securityperformancearchitecture.co.uk/bt-smarthub-2-monitoring-and-statistics/

Ever wondered why your super fast broadband isn’t living up to the promise?
Got a shiny new BT SmartHub 2 connected to a fibre service, but can’t really tell what’s using it?
Poor Wi-Fi connection stopping you unleashing the full potential?

We have the solution, with our network monitoring and analytics app available on the Apple App Store:

https://apps.apple.com/gb/app/my-broadband-stats/id1551544602

After a quick setup, our app will collect statistics from your router and provide you with a set of rich, intuitive graphs that tell you everything you need to know about what’s using your home network.

We are currently offering a free trial of our already great-value subscriptions, to let you see the full potential at no cost to yourself. So why not give it a go today?

Website Performance
Wed, 06 Jun 2018 23:46:54 +0000
https://www.securityperformancearchitecture.co.uk/website-performance/

A lot of the techniques I use to help you optimise your WordPress site get tested out here first.

This certainly used to be known as “dog-fooding” in more than one big IT setup. Still, I’m quite happy with how this site runs, and hopefully you will be too.

Although this site is specific to WordPress, a lot of the technology used is applicable to web hosting in general. I know how to squeeze the most out of your VPS to run Magento or just about anything else you like.

This site is hosted on an Amazon AWS t2.micro instance, so no one can accuse me of over-specifying the hardware. If you haven’t used AWS before then you will qualify for the AWS free usage tier, and I’d be delighted to help you migrate over to it.

Enough talk, here are some detailed measurements of the performance of this site.

Monitoring TalkTalk Router Bandwidth
Wed, 13 Apr 2016 17:23:15 +0000
https://www.securityperformancearchitecture.co.uk/monitoring-talktalk-router-bandwidth/

Having treated myself to a 4K TV recently, and given that there is _some_ 4K or UHD content available through Amazon Prime and Netflix, I wondered what the actual bandwidth requirements of streaming this kind of stuff are.

No problem, I’ll just sling up the excellent MRTG and find out, I thought.

Oh no, it’s not that easy. I have an FTTC service provided by TalkTalk. The VDSL modem/router is a “Super Router”, also known as the Huawei HG633. Running firmware v1.15t, it has neither SNMP nor telnet/SSH nor any other kind of CLI access. Bit of a dead end really.

Still, not to worry: I only use the HG633 to terminate the VDSL. It has an Ethernet uplink to an Apple AirPort Extreme that provides Wi-Fi for the house and a couple of gigabit-connected wired devices (thanks, TalkTalk, for providing me with an 80/20 Mbps WAN product and only 100 Mbps LAN side). Apple, however, have also removed SNMP capability from the AirPort range. *GRR*.

Now the obvious solution is to get a proper modem/router/access point, but these things are sent to challenge us. The HG633 has a tolerable web admin interface, which does expose some statistics, so we can surely yank those out with a bit of patience.

Turns out it is all JavaScript based in the HG633, but no worries, the excellent PhantomJS to the rescue. Lurking on the home LAN is a Raspberry Pi Model 3, which proves to be more than up to the task of driving this headless JavaScript engine. After a little bit of tinkering I was able to produce a PhantomJS script which logs in to the router, navigates to the appropriate page, and then dumps the DOM out. Judicious use of text parsing extracts the required information from the admin GUI, at which point it’s trivial to feed it to MRTG.
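To make the text-parsing half concrete, here is a minimal shell sketch, assuming a hypothetical dump format where the counters appear as `ReceivedBytes:`/`SentBytes:` labels; the real HG633 DOM looks nothing like this and changes between firmware versions. MRTG’s external-script interface expects exactly four output lines: bytes in, bytes out, uptime, and target name.

```shell
# Parse a (made-up) DOM dump for the two byte counters and emit them in
# the four-line format MRTG expects from an external script.
DOM='<div>ReceivedBytes: 123456</div><div>SentBytes: 7890</div>'
IN=$(printf '%s\n' "$DOM" | sed -n 's/.*ReceivedBytes: \([0-9]*\).*/\1/p')
OUT=$(printf '%s\n' "$DOM" | sed -n 's/.*SentBytes: \([0-9]*\).*/\1/p')
# Lines: in-counter, out-counter, uptime (unknown here), target name
printf '%s\n%s\n%s\n%s\n' "$IN" "$OUT" "unknown" "HG633"
```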

The results can be seen at http://mattfoster.noip.me/mrtg/

The code is ugly, doesn’t really cope with error conditions all that well, and is heavily dependent on some of the DOM structure in the router’s management page which will doubtless get screwed the next time TalkTalk pushes down a firmware update. Still perhaps the next firmware update will re-enable the CLI.

When there is a will, there is a way, even if it is a slightly stupid one, which certainly fails to deal with asynchronous requests properly or even to work all the time.

I hesitate to even publish the code, but as it was an annoying enough problem to “solve”, the PhantomJS script is available as router.js.txt, and the horrible bash script called by MRTG as mrtg-router.sh.txt.

UPDATE FOR 1.18t

Since the Huawei HG633 was updated to firmware 1.18t the scripts broke (no surprise really, given the lack of an API and the HTML scraping). The updated JS script is now available as router-1.18t.js.txt.

Backup to AWS S3 with s3cmd
Sat, 14 Mar 2015 19:31:05 +0000
https://www.securityperformancearchitecture.co.uk/backup-to-aws-s3-with-s3cmd/

Particularly since the introduction of Glacier, S3 from Amazon is quite attractive as an offsite backup offering (archive the backups to Glacier automatically after, say, a week with lifecycle management, and your storage costs drop dramatically).
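That lifecycle trick can be expressed as an S3 lifecycle policy. Here is a sketch of the XML (the `current/` prefix and the rule ID are made up for illustration); recent versions of s3cmd can apply such a policy with `setlifecycle`, or you can configure the same thing in the AWS console.

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>archive-backups-to-glacier</ID>
    <Prefix>current/</Prefix>
    <Status>Enabled</Status>
    <Transition>
      <Days>7</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```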

Of course, we still have to keep an eye on our data transfer costs. There are two candidates for backing up our Linux server/VPS to S3 that I’ve seen and used in the past: s3cmd or s3fs.

s3fs certainly feels nice, and we can rsync to it in the normal way, but (and it is potentially a huge “but”, no pun intended) AWS S3 charges are not just for storage, but also for bandwidth transferred and, perhaps critically, for the number of requests made to the S3 API. I freely confess to having done zero measurement on the subject, but it just feels instinctive that a FUSE filesystem implementation is going to make far more API calls than s3cmd’s Python scripts, which call the API directly.

So, using rsync-like logic, you might consider doing something like:

cd /var/www/
s3cmd sync -r vhosts --delete-removed s3://$BUCKET/current/vhosts/

There is a small snag to this approach, however. s3cmd keeps the directory structure in memory to help it with the rsync logic. This is fine if you are on real tin, with memory to spare. But on a VPS, especially an OpenVZ-based one where there is no such thing as swap, this can be a real show stopper for large directory structures, as the hundreds of MB of RAM required just are not available. Time for our old friend the OOM killer to rear its head?

Recursion of some form would be the elegant answer here. However elegance is for those with time for it, and the following seems to work very effectively with minimal RAM consumption:

cd /var/www
for i in `find . -type d -links 2 | sort | sed -e 's/\.\///g'`
do
s3cmd sync -r $i/ --delete-removed s3://$BUCKET/current/vhosts/$i/
done

The find command looks for directories with a link count of exactly two, that is, directories containing no subdirectories (only the . and .. entries): the leaf nodes of the directory tree. Then we back them up, one by one.
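To see why `-links 2` works: on traditional Unix filesystems a directory’s link count is 2 (its name in the parent, plus its own `.`) plus one for each child directory’s `..`, so a count of exactly 2 means no subdirectories. A quick sketch against a throwaway tree (note that some filesystems, btrfs for one, do not keep this convention):

```shell
# Build a scratch tree: a/ has two children, so only a/b, a/c and d are leaves.
T=$(mktemp -d)
mkdir -p "$T/a/b" "$T/a/c" "$T/d"
LEAVES=$(cd "$T" && find . -type d -links 2 | sort | sed -e 's/\.\///g')
echo "$LEAVES"
rm -rf "$T"
```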

Simples.

Google Authenticator with PHP
Wed, 04 Mar 2015 19:36:17 +0000
https://www.securityperformancearchitecture.co.uk/google-authenticator-with-php/

Gone are the days of SecurID OTP tokens costing an arm and a leg and being just for the Enterprise.

My own WP site here is protected with Google Authenticator, and there is no excuse for not doing the same on yours.  Just grab the awesome WP Google Authenticator plugin and you will be good to go.

My favourite iOS App for this is the awesome Authy but there are plenty out there.

But the world doesn’t run on WordPress; suppose you want to do it yourself in a LAMP site…

Grab a copy of the PHPGangsta class

Creating users:

$ga = new PHPGangsta_GoogleAuthenticator();
$secret = $ga->createSecret();
echo "Your OTP Secret is: ".$secret."\n\nIt is probably a good idea to take a note of this";
echo "\nPlease scan in the QR code to set up your OTP ";
$qrCodeUrl = $ga->getQRCodeGoogleUrl('MyApp', $secret);
echo "<img src='".$qrCodeUrl."'><br>";

$oneCode = $ga->getCode($secret);
$checkResult = $ga->verifyCode($secret, $oneCode, 2);    // 2 = 2*30sec clock tolerance
if ($checkResult) {
    echo 'OK';
    // escape/cast before building the SQL, even for values we generated ourselves
    $sql = "UPDATE localusers SET GASecret='" . mysqli_real_escape_string($link, $secret) . "' WHERE id=" . (int)$userRow['id'];
    mysqli_query($link, $sql);
} else {
    echo 'FAILED';
}

Authenticating users:

if (!isset($userRow['GASecret']) || !isset($_REQUEST['e'])) { // Impossible to authenticate
    header('HTTP/1.1 401 Authentication Impossible');
    header('Content-Type: application/json; charset=UTF-8');
    die(json_encode(array('message' => 'ERROR', 'code' => 1337)));
} else { // Try to authenticate
    $ga = new PHPGangsta_GoogleAuthenticator();
    $checkResult = $ga->verifyCode($userRow['GASecret'], $_REQUEST['e'], 2);    // 2 = 2*30sec clock tolerance
    if ($checkResult) {
        session_write_close();
        session_start();
        $_SESSION['OTP'] = 1;
        session_write_close();
        $result = "Authenticated";
        header('Content-Type: application/json');
        die(json_encode($result));
    } else {
        header('HTTP/1.1 401 Authentication Failed');
        header('Content-Type: application/json; charset=UTF-8');
        die(json_encode(array('message' => 'ERROR', 'code' => 1337)));
    }
}

Obviously these are just snippets, which will never actually run for you, but you get the general idea.

 

It is so easy, it is just rude not to.

Other Service Providers are also Available
Tue, 03 Mar 2015 20:00:24 +0000
https://www.securityperformancearchitecture.co.uk/other-service-providers-are-also-available/

Anyone who has worked with me in the past couple of years will know that I have a very strong preference for recommending Amazon AWS as your IaaS provider of choice. It is mature, robust, performant, and has a whole raft of PaaS-type features to make things easy and lower the sysadmin burden/requirement.

It also represents really good value for money to my mind, and what better way to learn about it than through the free usage tier (if you stay within the fairly generous limits, it truly is free). Since the introduction of the t2.micro node and general-purpose SSD storage (replacing t1.micro, which was rather memory-cramped, and our old friend spinning rust), it is a serious piece of virtual hardware for a rather special price.

There is, however, no such thing as a one-size-fits-all answer. Perhaps you need a UK IP address. Perhaps you want a better pricing plan on TB of data in and out of your VPS. Perhaps you don’t need all the fancy infrastructure capabilities, but just want a few Linux boxen “in the cloud”. If so, you could do a lot worse than to look at linode.com. I first had a shell on a Linode many, many years ago (it still works), and it seems to fit into the “it just works” bucket. Good price point (especially if data transfer is a worry for you), fast NIC speeds (getting over 100 Mbps is challenging at this price level), the ability to deploy images, a fabulous reporting/monitoring engine in Longview, and an API. Nobody should be touching anything that doesn’t have an API through which you can do everything you need.

I do not work for, and have never worked for, either AWS or Linode, but they have both been wonderful providers to me and my clients time and time again.

Sanitising User Input
Thu, 29 Jan 2015 15:57:58 +0000
https://www.securityperformancearchitecture.co.uk/sanitising-user-input/

Some days you need to get user input from a bit of an HTML form that wasn’t really designed for it, in order to get a great UX.

This means that the input gets passed around through JS, AJAX, PHP and goodness only knows what else before it turns up in the right place.

How do we make sure it’s safe to add to a SQL query?

Of course we can use PDO, but how about the general case?

$Words = str_replace("\xA0", " ",
    mysqli_real_escape_string($link,
        html_entity_decode(strip_tags(preg_replace('!\s+!', ' ', trim($Words))))));
$pieces = explode(" ", $Words);

Something just says this is plain wrong, but it’s working for me.

In this particular use case I’m trying to break up a user-provided “sentence” into a set of words, which I then do stuff with.
Pasted text is particularly difficult to parse here, as it drags things like non-breaking spaces along with it.
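For comparison, here is a rough shell analogue of the same clean-up chain: strip the tags, normalise the non-breaking space (0xC2 0xA0 in UTF-8), then collapse the whitespace. The input string is made up, and byte-level tr would mangle other multibyte text, so this is a sketch only.

```shell
# Made-up pasted input: markup, doubled spaces and a non-breaking space.
RAW='<b>hello</b>  world'$(printf '\302\240')'again'
# Strip tags, turn the two NBSP bytes into spaces, squeeze runs of spaces.
WORDS=$(printf '%s' "$RAW" | sed 's/<[^>]*>//g' | tr '\302\240' '  ' | tr -s ' ')
echo "$WORDS"
```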

I’m sure the above approach is wrong, would anyone like to tell me how to do it better?

WHM Things to be Aware of
Tue, 27 Jan 2015 16:11:34 +0000
https://www.securityperformancearchitecture.co.uk/whm-things-to-be-aware-of/

I’ve never been an enormous fan of WHM; in the long run it pays to know what you are doing.

Still, it does have a very useful role to play, even if some of the things it does just seem plain strange. Yes, EasyApache does give enormous flexibility, but so do the vendor-provided packages.

Some days the only way to fix things is by SSH’ing into the server, and you have to be really careful to make sure that you don’t change something at the command line that WHM has its claws into.

suPHP seems to be the default handler (I can kind of understand why for multi-tenant hosting setups, but perhaps you should have a real sysadmin hired in that scenario?). It has a charming habit of doing the unexpected; today’s head-banging surprise came from wondering why php.ini settings were not being applied.

After lots of grepping for ini_set statements, we eventually found an suPHP_ConfigPath directive in .htaccess.

*sigh*

.htaccess has a lot to answer for, and if you are looking for real web performance you should _NEVER_ use .htaccess: put the configuration in the Apache configuration file where it belongs. The additional cycles Apache has to spend checking for the presence of .htaccess in every directory along the request path, and parsing it if it is there, will hurt you in the long run.
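As a sketch of what that looks like in practice (the hostname and paths are illustrative), disable .htaccess processing and carry the directives in the vhost itself:

```apacheconf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/vhosts/example.com
    <Directory /var/www/vhosts/example.com>
        # No per-request .htaccess lookups or parsing at all
        AllowOverride None
        Require all granted
        # Whatever used to live in .htaccess goes here instead
    </Directory>
</VirtualHost>
```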

Allowing your “webmasters” to specify their own php.ini through .htaccess is just plain wrong.

Rant ends.

The Importance of Reading the RegExp Properly
Wed, 14 Jan 2015 16:20:38 +0000
https://www.securityperformancearchitecture.co.uk/294/

I did mention that my Varnish 4.0 configuration was pretty much out of the box here.

Well, there are some things that can come back to bite you when you copy’n’paste stuff that you find with Google.

One client was very patient with me today whilst some serious head scratching went on as we tried to work out why we had broken the shopping cart on one vhost but not on another on the same server, with identical versions of OpenCart running in the background. I was all ready to give up and put a “don’t cache this site/vhost” entry into the VCL when something caught my eye.

# Cache the following file extensions
if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)") { unset req.http.cookie; }

Read that regexp carefully. Yes, it doesn’t do quite what you expect it to: with no end-of-string anchor, it matches those extensions anywhere in the URL, not just at the end.

if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)$") { unset req.http.cookie; }

Works much more consistently and, more to the point, as intended.
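The difference is easy to demonstrate outside Varnish with grep -E (the URLs below are made up for illustration): without the $ anchor, the js alternative also matches inside an unrelated URL, which is exactly how a dynamic page can end up having its cookies stripped.

```shell
URLS='/catalog/style.css
/checkout.jsp'
# Count matching lines with and without the end-of-string anchor.
UNANCHORED=$(printf '%s\n' "$URLS" | grep -cE '\.(css|js|png|gif|jpe?g|swf|ico)')
ANCHORED=$(printf '%s\n' "$URLS" | grep -cE '\.(css|js|png|gif|jpe?g|swf|ico)$')
echo "unanchored=$UNANCHORED anchored=$ANCHORED"
```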

Pay attention to the detail, and remember there is always a reason for strange behaviour: the code only follows the rules we give it.
