FileSystem How-To

Introduction

We’re presently developing a mobile application using Cordova and the whole process has been brilliant! Most recently we’ve been working with the FileSystem of the mobile device we’re targeting (more often than not a virtual Android device running in Genymotion), so we’ve had to get our heads around the File plugin.

My team is made up of full-stack developers (with a leaning towards the LAMP and MEAN stacks) and front-end developers, which is why we’ve had some problems getting our heads around this exciting world of the FileSystem. We’re used to interacting with servers and putting stuff up there, not having to think about storing things on a user’s machine… apart from session cookies (and even then we’ll use the language’s built-in abilities more often than not). To an extent it almost felt a little dirty to get into the business of storing files on a machine.

In order to get started we looked at the plugin documentation and, when that started to get a little dry, we looked at HTML5 Rocks, and that was much more exciting! It must be noted though that the FileSystem API is only available in Chrome at present, but that’s cool as we primarily develop in Chrome (I often spend more time coding in the console than in my IDE).

While it was exciting we came across any number of head-scratching moments, so I’ve decided to document them here. I hope it helps!

Getting access

It’s all well and good to be able to play with the FileSystem but how do you get started? The first thing to do is to ask whether or not you can play!

Because we’re developing against both a virtual device and the Chrome browser (thankfully it’s the main browser I use for development and, as I noted above, it’s the only major browser to support the FileSystem API), we need to ask for permission nicely. This did cause us some problems to start with, but the thing to remember is that Chrome doesn’t support the unprefixed requestFileSystem function; it has its own webkit-prefixed version. It’s not a major issue but it’s worth knowing about. Another thing to bear in mind is that your Cordova app should always include the cordova.js library (it won’t be there until the app is built, but if it’s the last external JavaScript file you call then that’s not an issue in development). Because cordova.js adds a cordova object to the window object, we can test for that to tell whether or not we need to do the substitution for Chrome. This is in the main file of our project and it works a treat:

var requestedBytes = 1024*1024*10; // 10MB
if(!window.cordova){
    window.requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;
    navigator.webkitPersistentStorage.requestQuota (
        requestedBytes,
        function(grantedBytes) {
            window.requestFileSystem(
                PERSISTENT,
                requestedBytes,
                function(fs){
                    window.gFileSystem = fs;
                    fs.root.getDirectory(
                        "base",
                        {
                            "create": true
                        },
                        function(dir){
                            console.log("dir created", dir);
                        },
                        function(err){
                            console.error(err);
                        }
                    );
                },
                function(err) {
                    console.error(err);
                }
            );
        },
        function(err) {
            console.error(err);
        }
    );
}else{
    document.addEventListener(
        "deviceready",
        function() {
            window.requestFileSystem(
                PERSISTENT,
                requestedBytes,
                function(fs) {
                    window.gFileSystem = fs;
                    fs.root.getDirectory(
                        "base",
                        {
                            "create": true
                        },
                        function(dir){
                            // alert("dir created");
                        },
                        function(err){
                            alert(JSON.stringify(err));
                        }
                    );
                },
                function(err){
                    alert("Error: " + JSON.stringify(err));
                }
            );
        },
        false
    );
}

This is somewhat lengthy but I like the indentation as it gives me a nice indication of what’s happening and where. In the function which uses the Cordova plugin we use alerts, whereas in Chrome we use console.log. In order for the alerts to make sense I’m converting the err objects into strings. You can test this yourself now by simply entering this in the console (just so long as you’re reading this in Chrome):

window.requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;
navigator.webkitPersistentStorage.requestQuota (
    1024*1024*10,
    function(grantedBytes) {
        window.requestFileSystem(
            PERSISTENT,
            1024*1024*10,
            function(fs){
                window.gFileSystem = fs;
                fs.root.getDirectory(
                    "base",
                    {
                        "create": true
                    },
                    function(dir){
                        console.log("dir created", dir);
                    },
                    function(err){
                        console.error(err);
                    }
                );
            },
            function(err) {
                console.error(err);
            }
        );
    },
    function(err) {
        console.error(err);
    }
);

Hopefully you’ll see an alert asking for permission to use Local data storage and, once you’ve granted access, the console should print out these 2 lines:

undefined
dir created

The undefined means that the function doesn’t return anything and isn’t anything to worry about, but the dir created means we’ve created a base directory. How cool!

In order to better see the effects of what you’ve done, download the HTML5 FileSystem Explorer Extended Chrome extension (FileSystem Explorer). Once you’ve installed it you’ll be able to see an empty base directory in your persistent storage. If you have a way of serving files (and who doesn’t?) you could do worse than looking at the HTML5 Rocks page I linked to above and running the sample code. It’s really rather cool!
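
If you’d rather stay in the console, you can also list what’s inside base programmatically with a DirectoryReader. Here’s a minimal sketch (it assumes window.gFileSystem has been set by the snippet above; note that readEntries may need to be called repeatedly for large directories, but one call is plenty for ours):

gFileSystem.root.getDirectory(
    "base",
    {
        "create": false
    },
    function(dir){
        // Ask the directory for a reader and log whatever it finds.
        var reader = dir.createReader();
        reader.readEntries(
            function(entries){
                entries.forEach(function(entry){
                    console.log(entry.isDirectory ? "dir:" : "file:", entry.fullPath);
                });
            },
            function(err){
                console.error(err);
            }
        );
    },
    function(err){
        console.error(err);
    }
);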

Working with Directories

I like to keep things somewhat organised. I guess I’m something of a neat freak, but that’s okay really as it means I don’t often lose things. As such I’d like to keep the files I want to work with organised. We also want our application to be enclosed in one folder so that if/when we make another app we can distinguish between document hierarchies. So we’ll create a base folder as soon as we get told we can play with the FileSystem. Within that folder we can create other folders. This way we can create a whole taxonomy of our application’s concerns and we’ll be able to CRUD things later confident that we know where things should go.

This process is somewhat philosophical though so I’d suggest giving it some thought. Coming from a LAMP background means that I’m all in favour of normalisation and believe in repeating myself as little as possible; being thrust into the world of MEAN means that I’m coming to embrace the “messy” approach of replicating data as and when needed, as well as expanding database rows as I need to. This does require something of a shift from rigid to flexible thinking, but getting my head around both ways of working has helped me become a less rigid programmer.

Once we’ve got a reference to our base directory we can start creating a folder tree with base being the trunk. This then allows us to create document leaves. One thing to remember though is that we can get directories and files… and we can get a reference to them even if they don’t exist by telling the API to create them if they don’t exist. But we do need to be careful about creating documents in folders that don’t yet exist and this is the one big gotcha I’d like to point out to you in this piece. I’d like to tell you how we got around it too.

First though let’s look at an example of getting a file and creating it if necessary:

gFileSystem.root.getFile(
    "base/test.txt",
    {
        create:true
    },
    function(file){
        console.log("Created File", file);
    },
    function(err){
        console.error(err);
    }
);

Again, you’ll have to excuse the formatting (this is the last time I apologise about it though – I know we could do this as a one-liner but I like this passing of functions and I really do think functions deserve their own line… and if functions deserve their own lines surely objects and strings do too?). Hopefully if you enter this into the console you’ll get a nice message saying Created File and a FileEntry object. Go ahead and expand that FileEntry object and check that it has these attributes: filesystem: DOMFileSystem, fullPath: "/base/test.txt", isDirectory: false, isFile: true and name: "test.txt".

That’s grand isn’t it? We’ve created a file… but we might want to write to it. We can do that by using a FileWriter on the FileEntry:

gFileSystem.root.getFile(
    "base/test.txt",
    {
        create:true
    },
    function(file){
        console.log("Got File", file);
        file.createWriter(
            function(fileWriter) {
                fileWriter.onwriteend = function(progress) {
                    console.log("Write completed", progress);
                };
                fileWriter.onerror = function(err) {
                    console.error("Write failed", err);
                };
                var blob = new Blob(
                    ['Lorem Ipsum'],
                    {
                        type: 'text/plain'
                    }
                );
                fileWriter.write(blob);
            },
            function(err){
                console.error("Error creating writer", err);
            }
        );
    },
    function(err){
        console.error(err);
    }
);

If you now navigate to base/test.txt in FileSystem Explorer and click on the file, Chrome should open a new file with “Lorem Ipsum” in it. Cool ehh?

There are all sorts of other things that you can do with a FileWriter, like appending lines or saving binary data such as images. Have an explore but do remember that once you have a reference to the file the world’s your lobster!
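
For example, appending to the base/test.txt we just wrote is a matter of seeking to the end of the file before writing. Here’s a rough sketch (again assuming gFileSystem has been set up as above):

gFileSystem.root.getFile(
    "base/test.txt",
    {
        create:true
    },
    function(file){
        file.createWriter(
            function(fileWriter) {
                // Move the write position to the end of the file so the new
                // blob is appended rather than overwriting from position 0.
                fileWriter.seek(fileWriter.length);
                fileWriter.onwriteend = function(progress) {
                    console.log("Append completed", progress);
                };
                fileWriter.onerror = function(err) {
                    console.error("Append failed", err);
                };
                fileWriter.write(
                    new Blob(
                        ['\nDolor sit amet'],
                        {
                            type: 'text/plain'
                        }
                    )
                );
            },
            function(err){
                console.error("Error creating writer", err);
            }
        );
    },
    function(err){
        console.error(err);
    }
);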

Gotcha

So we can write to a file once we have a reference to it, whether or not it exists, and we can use directories once we have a reference to them, whether or not they exist… but what about getting a reference to a file that doesn’t yet exist within a directory that doesn’t yet exist? This is the major issue we dealt with, and it had us stumped for quite a while. If you replace base/test.txt with base/test/test.txt you’ll get a lovely FileError object on the console with these attributes: code: 1, message: "A requested file or directory could not be found at the time an operation was processed." and name: "NotFoundError".

Clear as mud ehh?

Basically we need to make sure that the directory we’re looking in exists before we can look for the file: while we set create to true when getting the file, that doesn’t create the missing directory because we aren’t calling getDirectory. Phew! HTML5 Rocks has a lovely function for just this situation called createDir, which accepts a directory entry (we pass it gFileSystem.root) and an array representing a directory path (i.e. in our example: ["base", "test"]):

var path = 'base/test';

function createDir(rootDirEntry, folders) {
    // Throw out './' or '/' and move on to prevent something like '/foo/.//bar'.
    if (folders[0] == '.' || folders[0] == '') {
        folders = folders.slice(1);
    }
    rootDirEntry.getDirectory(
        folders[0],
        {
            create: true
        },
        function(dirEntry) {
            // Recursively add the new subfolder (if we still have another to create).
            if (folders.length) {
                createDir(dirEntry, folders.slice(1));
            }
        },
        function(err){
            console.error(err);
        }
    );
}

createDir(gFileSystem.root, path.split('/'));

This is brilliant, but we needed a reference to a file so we could write to it, so we came up with this:

var path_file = "base/test/test.txt";

function createDirAndFile(rootDirEntry, folders, callback) {
    if (folders[0] == '.' || folders[0] == '') {
        folders = folders.slice(1);
    }
    rootDirEntry.getDirectory(
        folders[0],
        {
            create: true
        },
        function(dirEntry) {
            if (folders.length > 2) {
                createDirAndFile(dirEntry, folders.slice(1), callback);
            } else {
                callback();
            }
        },
        function(err){
            console.error(err);
        }
    );
}

createDirAndFile(
    gFileSystem.root,
    path_file.split('/'),
    function(){
        gFileSystem.root.getFile(
            path_file,
            {
                create:true
            },
            function(file){
                console.log("Got File", file);
                file.createWriter(
                    function(fileWriter) {
                        fileWriter.onwriteend = function(progress) {
                            console.log("Write completed", progress);
                        };
                        fileWriter.onerror = function(err) {
                            console.error("Write failed", err);
                        };
                        var blob = new Blob(
                            ['Lorem Ipsum'],
                            {
                                type: 'text/plain'
                            }
                        );
                        fileWriter.write(blob);
                    },
                    function(err){
                        console.error("Error creating writer", err);
                    }
                );
            },
            function(err){
                console.error(err);
            }
        );
    }
);

It’s lovely because we use recursion to create the directory structure (in the same way as createDir did) before writing the file… it wasn’t a great deal of use in our most recent application as the hierarchy is really quite shallow, but I think it will be of use later on. Go ahead and try it in the console; you should end up with the expected directory structure when you look at it with FileSystem Explorer.

Conclusion

The FileSystem API ROCKS! It can cause something of an existential crisis for those of us who are not used to working with a FileSystem – except on the server – but once you get your head around it, it really does make a lot of sense. Most of the articles use an error handler to deal with the errors that can occur but, if you’re looking at the console trying to debug what went wrong where, the error handler function generally doesn’t give you a line number. This is really, really annoying: we ended up commenting out the internals of our functions to try and track down a bug, before going back to adding console.logs to the relevant callbacks.
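
One small thing that eased the pain was wrapping console.error in a little factory so that every error handler carries a label saying which call it came from. It still won’t give you a line number, but it does tell you where to start looking. A minimal sketch (the label argument is just our own convention, not part of the API):

function errorHandler(label) {
    // Returns an error callback that prefixes the error with a label
    // so the console tells us which call actually failed.
    return function(err) {
        console.error(label, err);
    };
}

gFileSystem.root.getFile(
    "base/missing/test.txt",
    {
        create:true
    },
    function(file){
        console.log("Got File", file);
    },
    errorHandler("getFile base/missing/test.txt")
);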

Posted in Development

Cloud Platform: A Shifting Landscape (Part 1)

 – A paradigm shift is no longer just a possibility, but a reality –  

The 3 Ds ‘Design, Development and Delivery’ have always been at the forefront of technology, and never more so than today.

The landscape is changing once again. In the past, the public sector (along with the private) moved away from inflexible mainframe applications to a client/server app model. This was made possible through the extensive use of the PC desktop acting as the client. The reasons for this move were a more cost effective delivery model together with far greater ease of use. The problem, though, is that this also caused greater complexity, and over time associated costs increased as the number of applications went up.

Then there was another wave of improvement: server and desktop virtualization, where applications and servers were delivered through a simpler, thin client. While this improved things a lot, it required even greater complexity at the datacenter, with variable demands and unpredictable usage patterns, and, most importantly, it still largely tied users to their base.

Today, most applications are becoming “as a Service” based, delivered entirely through a browser. This fundamental shift affects almost all areas of formerly desktop applications, from productivity and email to complex specialist and line-of-business applications.

Furthermore, this standardization on the web browser as “client” affords us a unique opportunity to reduce complexity across our IT infrastructure by no longer running the servers ourselves. So, as before, the reasons are based on cost savings and ease of delivery, now realised through web-based SOAs (service-orientated architectures) such as PaaS (platform as a service) and SaaS (software as a service), which rely on fast, reliable and affordable resources, i.e. public cloud platforms such as force.com from Salesforce and AWS (Amazon Web Services).

This paradigm shift purports to do away with complexity, and as with every shift a new set of technical challenges awaits. The problem, as with the move away from in-house / non-cloud environments, is that existing applications are not designed for a multi-tenant service platform.

But a major IT win, especially for the Public Sector, is the opportunity to create new applications that reflect the changing needs of their user community. The removal of complexity and the resulting reduction in costs is an added bonus.

Denis Kaminskiy,  CEO Arcus Global

Posted in Cloud Computing

10 reasons the removal of external accreditation from G-Cloud is a good thing.

Earlier this month, we received confirmation that GDS and CCP were removing accreditation requirements from the G-Cloud catalogue. I always pay close attention to what is communicated about the G-Cloud framework. More than 40% of our revenue came from G-Cloud last year, and we are selling more and more through the framework every week. As 80% of our business is Public Sector, we have been huge beneficiaries of the GDS SME programmes and G-Cloud in particular.

The removal of the requirements to accredit (IL2 and above) follows the complete overhaul of the security standards applied to government data in April this year. Gone are the confusing Impact Levels (interpreted differently by every single government client I have ever met), replaced by the simpler system, which in my view opens up a lot of options, especially at the Local Government level. It also removes “badges” from the G-Cloud store, and replaces them with a self-certification process (which we are yet to see) from G-Cloud 6 onwards.

I have heard views that this move may make it harder for SMEs to compete, as it removes a potential distinction, and may make buyers more nervous. Everyone can potentially offer self-certified solutions, suitable for anything they wish to claim. Hence it may be harder for SMEs to distinguish themselves, or to provide assurance, and trust may naturally shift to larger players.
I believe that this is not the case. Here are my 10 reasons why this is a good thing.

1) Having an IL3 or IL2 certified badge on G-Cloud, while looking good, was of little value to the buyer. Whatever certification a supplier brings, the buyer still needs to do a full risk assessment of the solution and of their overall environment. If a breach occurs, the ICO or anyone else will not care whether the supplier was accredited – only whether the risk assessment was done and procedures were followed for the solution / implementation. You can implement an accredited solution in a bad way, and equally an unaccredited solution may provide a much higher overall level of security if designed correctly.
2) Sure, saying you will only deal with IL3 certified suppliers makes life a little easier at procurement, but it also means that you disregard solutions that may be even more secure, cheaper, or more elegant and simpler, in order to satisfy an artificial requirement. Remember, having a badge means nothing at all to you – all that matters is how you architect it. Some services do demand accreditation, but even that is starting to disappear, and we have always been successful in making the argument that what we have designed / proposed is more secure than a “vanilla” IL3 would provide.
3) The certification process was long, and GDS had to have resources to satisfy it – the more suppliers, the more resources. If 80% of suppliers sold nothing through the programme, then 80% of certification resources were wasted. And why should all buyers pay for the (often unnecessary) requirements of the few? In many cases IL3 certification was demanded by lazy security managers (see points 1 and 2).
4) The certification was done at a point in time – like all accreditation it was never a substitute for understanding the service, technology or delivery model – which may change and be different in 6 months, further undermining the value of the badge. Given that G-Cloud contracts last 2 years and a catalogue entry lasts 1 year, a buyer could potentially be relying on a 3-year-old badge, which is probably useless…
5) For PSN or to achieve their own accreditation for the overall solution, buyers saved very little time in dealing with accredited suppliers – they still needed the underlying information, not just the badge. Good suppliers maintain and understand the information, without necessarily certifying it externally.
6) Most importantly (and I have seen this happen), the badges gave the impression that a solution (however it is implemented) was suitable for the purpose, and that was simply not true – in fact it aided the confusion and suggested that internal processes could somehow be simplified or avoided if only you went with an accredited supplier. It also made security look like a “gated”, static process, as opposed to a dynamic, constant and continuous assessment.
7) None of our existing clients have ever asked us for a certification – our competitive advantage has always been in flexibility, price and depth of technology understanding, and it remains there whether or not certain solutions are certified. Helping each one of our customers to achieve the right level of security is always part of what we do, as part of every project, so we never saw the reason for doing it “externally” for general purposes.
8) External accreditations (such as ISO27001:2013) provide the same or a similar level of assurance, and help demonstrate that the company is serious about security internally – and they are “supplier wide”, not simply targeting one offering. Hence they are a better guide for buyers in deciding whether they are dealing with a serious company or not.
9) In terms of competition, the difference between suppliers is very clear – just try inviting 10 people to an RFP. Larger companies tend to have a completely different outlook, attitude and approach to customers, and an SME having a badge (or not) is completely irrelevant in that process. We have the ability to quickly make decisions, start partnerships, put fees at risk, work on a shared-reward basis, or do pretty much anything the client wants. If the buyer values this, then they will continue to do so; if not, then they will probably never be our clients, whether we are certified or not.
10) Not having badges increases the number of potential suppliers for each requirement. On the one hand, this can be bad – too many, and the buyer can struggle to process them properly – but on the other hand, it should force them to look at other factors more, and more competition is always good in the long run. I believe that after a while, suppliers who are not serious about working with the Public Sector will start (this has already begun) to fall away, exposing the ones that are more committed and focused on this market.
The above are my views, and I do not have a deep security background, but purely from a commercial / business perspective I think the change is positive, and will help Arcus and other companies that want to do business in the sector to compete fairly and robustly.

Posted in Company News

A new Portal from Janet, AWS and Arcus Global.

A new Portal for higher education and research.

Arcus are proud to be part of the launch of a new Portal that enables users in the Janet community to procure AWS cloud services with new, additional benefits. The Portal provides a coherent admin facility and offers:

  • monthly invoicing (no credit cards required for payment)
  • itemized billing, consolidated across users/departments
  • billing in GBP, not dollars
  • the ability to set spending-limits per user, or by department
  • volume-discounts through aggregation across multiple HE institutions

You can watch a webinar about the new Portal here, where you will hear a short overview of AWS technologies and how they are being used by universities, researchers and schools around the world to reduce costs, shorten academic projects, and increase the speed and impact of research. You will also find out how to sign up for an account on the Portal.

The webinar is aimed at any user in the Janet community (in HE, FE or schools) with an interest in AWS (whether or not an existing AWS user), and at System Administrators who could act as central points of contact for consolidated, multi-user billing.

You can register for an account and visit the portal here.

 


Posted in Company News

Five years in the Cloud.

From humble beginnings

Starting out in the back of the old company BMW, Denis Kaminskiy and Lars Malmqvist travelled the length and breadth of England presenting their new and innovative idea to local and central government departments. Six months later, they moved into their first office at Castle Hill, Cambridge and began to expand the team.

From their small original office on the first floor, which totalled 100 square feet with 2 desks, they eventually moved to a larger office in the same building. When the team grew to over 20 people, they moved again, this time to the largest space available at Castle Hill. The rapid growth continued, and in December 2013 they moved to their current location at the Future Business Centre in Cambridge. In July 2014, with over 40 members of the team, Arcus will be moving again, still in the FBC, but to a new, larger office space.

And now….?

Arcus is proud of how far it has come over the last five years. With over 70 Public Sector clients, the company is now a leading Public Sector Partner for innovators such as Salesforce.com and AWS. Arcus turns over £2 million and has achieved a 100% growth rate in recent years.


Happy 3rd Birthday to Arcus


The original office…

The kitchen

The new office (kitchen)

 

 

Products, ideas and awards

Arcus Global won its first award in 2011 for its DataTap product. The same year, Arcus won the Cambridge Weekly Award for Cambridge Graduate Business of the Year.

Since then, Arcus have gone on to develop:


  • Arcus Social – Children’s Centre Management
  • Arcus Inform – Information Asset Register
  • Arcus Environment – Building Control

The Future

For this innovative and fast moving company, the future holds many opportunities.  New sectors, products and technology, and maybe even international expansion!

You can discover more about Arcus Global and Arcus products on our YouTube video.

Posted in Company News

Update SalesForce rich text area in a VisualForce page

I’ve just come across this post by Matt Lacey, a day too late as I discovered this yesterday!

Basically the issue is that Salesforce has rich text fields which are not in the least bit easy to re-render. A colleague had a requirement to change the text within the rich text area depending upon the user changing a select value. The base starting points for the rich text area were contained within hidden fields on the form, but he needed to update the rich text area when the select changed without Salesforce complaining too much.

I inspected the element and clocked that the ids he had given to various apex components were coming through properly, but that they were mixed up in some sort of colon-and-alphabet soup. That’s where I got to thinking about jQuery’s selectors and had a wee brain wave… the missing piece was the CKEDITOR object and its setData method, and we were in business.

Given this structure on the visualforce page:

<apex:pageBlockSection rendered="{!NOT(draftMode)}">
  <apex:selectList 
    label="Request Description source:"
    value="{!descriptionSelected}" 
    multiselect="false" 
    size="1"
    id="descOption">
    <apex:selectOptions 
      value="{!descriptionList}" />
  </apex:selectList>
</apex:pageBlockSection>

 ... 

<apex:pageBlockSection>
  <apex:inputField 
    label="Request" 
    value="{!pagePubRec.Published_Request__c}"
    id="requestText" />
</apex:pageBlockSection>

 ... 

<apex:inputHidden 
  id="customerDescription" 
  value="{!customerDescription}"
  />
<apex:inputHidden 
  id="caseDescription" 
  value="{!caseDescription}" />

We can use this JavaScript:

<script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script>
  $(function(){
    // Select the select that ends with the id of 'descOption'
    // and listen to it changing...
    $("[id$='descOption']").on("change", function(){
      // Probably not needed but better safe than sorry.
      var $this = $(this);
      // Get the full id of the textarea which contains 'requestText',
      // including all the salesforcie stuff, as that'll be the reference
      // we'll use to grab the correct CKEDITOR instance.
      var cke_editor_instance = $("textarea[id*='requestText']").attr("id");
      if ($this.val() === "Customer Request description"){
        // If the value of the select is "Customer Request description"
        // update the textarea which underlays the rich text editor with 
        // the data from our hidden field...
        $("textarea[id*='requestText']")
          .val($("[id$='customerDescription']").val());
        // ...and set the data for the CKEDITOR instance with the same value.
        CKEDITOR.instances[cke_editor_instance]
          .setData($("[id$='customerDescription']").val());
      }else{
        // We've only got 2 so no need to use a switch or 
        // something else more fun.
        $("textarea[id*='requestText']")
          .val($("[id$='caseDescription']").val());
        CKEDITOR.instances[cke_editor_instance]
          .setData($("[id$='caseDescription']").val());
      }
    });
  });
</script>

Yon Matt Lacey is right though, this is something of a hack… it’s not a terrible hack… but it is a hack! There’s no guarantee that this approach will continue to work, but I’ll keep my fingers crossed while I look around for other alternatives.

Posted in Development

Our new office

Back in December we moved offices to the Future Business Centre in the North of Cambridge.

The Future Business Centre has been set up as an “enterprise hub that demonstrates the possibilities for combining good business practice with the business of doing good”.

We have more space, lots of concrete “feature walls” and views of Cambridge all around us.


We are always looking for new talent to join our exciting business. If you feel you’ve got what it takes to hit the ground running in a fast-paced start-up environment, then email your CV to careers@arcusglobal.com or visit http://www.arcusglobal.com/about-us/careers/

Posted in Company News

The fallacy of “national” systems

Merging services / creating “national” software / platforms for Local Government.
There have been suggestions that some councils are just too small. You can see the whole discussion here:

I thought I would pull out some of the points I have made in the discussion, as I thought they were worth sharing:
In terms of merging services, it is a naive and dangerous fallacy to think that “national” systems are more efficient or cost less per organisation, per transaction or per individual.

While this seems logical, it has never (or almost never) happened in reality. There is a reason for it: once all demand is aggregated, there are only one or two suppliers on the planet that can do it, so they just give a very high price. The contract is very long term, so why would they ever improve (what would be their motivation?), and, most importantly, the system would be so complex and cumbersome that daily errors costing millions would simply be marginal and go unnoticed.

So the price would probably go from 500k per council to several million.
The idea also fails intellectually: instead of standardising services and processes (so they can be delivered in the same way by many interchangeable, competing providers), you standardise the supplier, to deliver a fragmented, uneven and complex service. And if you think that this supplier would standardise services for you – why would they ever do this? They make more money without it, you cannot get rid of them, and even if you did, there is no one else in the world able to deliver the service.

It was a terrible mistake to get providers to deliver national programmes before services and processes were made standard. And if they ever become standard, the market will always be a better alternative, as competition drives ruthless efficiency and improvement…
But isn’t the Private Sector doing this? Isn’t Santander running the same banking software in all of its branches?
Well, this is very different: for one, scale is the issue. Santander is nowhere near the size of Local Government, nor is each branch the size of a council (branches are 20–50 people at most, sometimes fewer).
They are also working for ONE company, so they are all on the same employment contracts, with the same policy, same pension etc…

Santander’s customers (the public) expect and receive the SAME service across the UK, and their banking needs are similar to one another. They also offer (even the largest private sector organisations) only tens of services (even fewer in one division of a corporate).

A District Council of 130 people may offer up to 1500 individual services. They have no choice – they have to offer them as legislation demands it.

This matters because it is then possible for the private sector to solve it with one process across at least one country. This can never happen in Local Government. The needs of the public are very different. Rural Cumbria vs central Manchester vs Kensington and Chelsea will have completely different demographics, spend profiles, needs and demands / priorities.

But even IF we ignore all of that and create one process, forced nationally, it would be an average: everyone would have it, and it would be hated by everyone, as no one would get what they want.

What would then be the point of local democratic representation, if the process were the same everywhere, with no or limited ability to change it at a local level? This is true even for things like parking or food safety inspections – do you make a premium service that is fast (and charges the person £1 per use), or a cheap one (free) that is slow? etc…
If the democratic process is pointless and hollow, with no power, then why participate? So it leads to an erosion of democracy in general at the local level.

So it is not possible to make a national process with such complexity.
But doing so would be solving the wrong problem. The problem is cost and lack of choice, NOT that the service is different from area to area.

As I said above, we should enforce interoperability and standards on systems – that way a market will emerge. Disaggregate systems (so do the opposite – make the components smaller, NOT bigger). Then the market will grow; it will be compliant with standards, and vibrant, forcing the price down.

Look at the internet for example:

HTML is an example of a standard, and as long as your website is compliant, it will work, whoever the provider is, whatever it runs on. So prices drop, choice is abundant, and consumers pay less.

Posted in Applications, ICT Strategy

It is officially Christmas at Arcus!


Posted in Arcus Updates

Moving a Council to “Infrastructure Free” on a shoestring

One council created quite a storm at the recent Public Sector ICT event by announcing that they will soon be the country’s first completely cloud-based council. Rocco Labellarte, CIO at the Royal Borough of Windsor and Maidenhead, has spent the last 12 months transitioning the council to Public Cloud-based infrastructure. This was done at a cost of only £250,000 – about one third of the cost of a data centre refresh.

Rocco said: “It really is possible to move to the cloud without having to outlay vast sums of money up-front. Our data storage costs have gone down to a tenth of what they were, from £1.2 million down to £120,000 per year, savings which more than cover the cost of the migration work, even in the first 6 months.”

In addition to this, the council spent approximately £10,000 on 900 new mobile devices, which look set to save the council 60% on previous mobile bills and allow browser-based access to a number of key applications.

Cambridge-based SME Arcus Global kicked off the council’s journey to the cloud back in 2010, when it was asked to develop a five-year ICT strategy. The strategy focused on future, scalable public cloud technology and was well ahead of its time, but it laid the groundwork for the council – 2011/12 was spent lining up the business case and understanding council data, systems and major risk areas. By 2012/13 most of the technologies underpinning the strategy had become available in the UK via the G-Cloud. RBWM began implementation in January 2013 and will complete its journey by March 2014.

Arcus Global have helped RBWM in its transition to cloud provider Amazon Web Services for its core infrastructure. Skills transfer and training of RBWM IT staff was a key part of the decision to work with Arcus: Rocco explained that it is important to keep in-house expertise: “Not only does it save on costs in the longer term, but it allows us to be more flexible to future ICT demands. Plus, it means that we are in a better place to ensure we are getting the best service possible from our cloud providers.”

RBWM is not stopping here. They are pursuing exciting new projects in areas of application consolidation and enterprise platform development, where their collaboration with Arcus continues. Watch this space!

Posted in Company News