Saturday, 1 December 2012

It's clear in Henley - The first Wallingford Head

The number of people who on Sunday December 10th 1995 told me "it's clear in Henley" was unbelievable. As you can probably guess, it wasn't clear in Wallingford. More of that later....

You might want to read the first article on Getting Wallingford Head started before reading this one.

The year of 1995 was amazing. Rowing was still quite new to me - I had joined the Men's senior squad at Wallingford and was running and organising it with our head coach Richard Tinkler. We had a huge squad - at one time over forty people. At The Head of the River Race in March Wallingford had three entries coming in 39th, 113th and 238th (although I was disappointed not to make any of the crews). When the regatta season started we had a fairly difficult run-in to Henley Royal Regatta in choosing our crews, including our top crew. We were one of the largest clubs at Henley that year with two eights in the Thames Cup, a Wyfold coxless four and a coxed four in the Britannia (yes that is the correct spelling). Plus two of the vets qualified a pair in the Goblets. Being so close to Henley, we moved our training there during the week and back to Wallingford for the weekend (when Henley got busy). It was such a good atmosphere there. The evening of the qualifying races was pure joy (maybe because I wasn't racing!) - especially when the results were announced and we got both crews into the Thames Cup. Unfortunately all our crews were knocked out in the first round with the exception of our coxed four in the Brit - which went on to win the event. It was Wallingford's first win in fifteen years. And they won in style, coming from behind. Superb.

Throughout the rest of the summer I cannot say my attention was drawn to the head race I had put in the rowing calendar. I kept rowing (including being in a winning eight at Peterborough Summer) and then went straight back into training.

Wallingford Sculls is one of the first head races of the autumn and again I shadowed Roger Brown, who was running the event. At the time I started to think about what boat classes we were offering - all the head races before Christmas were for small boats and fours. There were no events for eights. But at Wallingford we do a lot of our training in eights - with such a big squad it was necessary to do so. Also, among the clubs we were targeting for entries were Oxford colleges and schools - who all train in eights as well. Oxford Brookes (who had two eights stored at Wallingford) also trained full time in eights.

So I changed the event to be fours and eights. I really didn't like it when people told me it wasn't possible to run an eights head - that the river wasn't suitable or we wouldn't get the boats into the trailer park. Looking back I didn't really know if I could run this event, but at the time I was convinced that we could run an eights event and run a good one. I remember rowing outings racing side by side from the bottom lock to the club - intertwining blades a lot of the way (most rowers will have had at least one outing like that).

Then there was the issue of the course. I had thought that the 4-kilometre sculling head course would be too short. We could extend it slightly (to 3 miles), which would also make starting easier. I raced in Wallingford Sculls in a quad, which was actually useful for thinking about some of the logistics from the crew's point of view. For the sculling head we need the majority of crews well below the start (below a bridge and a narrow part of the river) with some of the boats able to turn into position easily (if you have raced at Wallingford then you will probably understand that better than I have explained). For Wallingford Head all crews need to be below the bridge first - and the timing and start teams need to do a bit more walking (I did get complaints about that!).

There was still a lot of work to do. I took on so much myself - it was my baby. Publicity was pre-internet - so I printed out lots of flyers and at every event we competed in I had the rowers put flyers on cars. One night I remember typing into Excel all the rowing club addresses from the almanac and then doing a large post run. I left getting permission until almost the last minute - permission from South Oxfordshire District Council to use the car park and from the Environment Agency (they were a lot tougher on conditions then - including making sure that the lock keepers were informed, with flyers to hand out to boats coming through - IN DECEMBER!).

I was also the entries secretary. Again pre-internet, so everything was on paper (with cheques). I remember the entries arriving in the post and a few hand delivered. From many places - Hereford, Bristol, Stourport, Warwick, London. And from public schools - Radley, Latymer, Abingdon, Shiplake. I look at the entry list now and virtually all the clubs still enter each year.

We did offer pairs events as well - and did so for a few more years after 1995. After a while, though, we decided to drop them as they got in the way when putting gaps between events. Normally the fastest pair is faster than the slowest girls' four - so we ended up having pairs overtake fours.

In total we got 99 entries - a very good start. It would cost around £350 to host the event (excluding the cost of pots, which we could reuse). The Wallingford entries alone would cover that cost. All was looking good.

The day of the race came. The first division for the Sculling Head had been at midday - but it gets dark earlier in December, so our first division was at 11am. That meant a very early start to get everyone in place - the rafts were moved at 7am, along with people out on the river putting out buoys and signs. Registration started at 7am as well - and I can tell you it is bl***y cold in December.

At that time in the morning we had a problem. You couldn't see to the other side of the river. We had freezing fog. Usually it burns off - the first crews would be boating around 10am, so we had a few hours. I wasn't worried about this. Everything was in place - rafts, buoys, catering, safety cover. Crews had arrived - we had got the trailers into the car park without a problem (although I do remember an incident a few years later when I think Upper Thames took a piece off the end of a boat around a tree when leaving!).

But we had this freezing fog. At 9am it wasn't looking good - if anything it was looking heavier. As people arrived they kept telling me that it was clear in Henley - so I shouldn't worry. At least everyone could see what the problem was. I have rowed in freezing fog and it's not normally a big problem. You can see far enough in front of yourself to manage. But in a race other crews are going to be much closer. And this fog was heavy. I wouldn't have gone rowing in it even outside of racing.

Boating at 10am was delayed. At the time we were quite inexperienced at putting in place a contingency plan. The second division wasn't due until 1.30pm, with boating starting at 12.30. There wasn't a lot of room afterwards to delay this division, but there was a bit of time between divisions. Thinking quickly about the problem, the latest that the last crew could finish would be around 12.15 - with 20 minutes of racing that meant a latest boating time of 11am. Not much margin - we could probably allow another hour. So we got the message out.

11am came - and went. The next change was to allow crews to decide what they wanted to race in a single division, with boating starting at 1pm. We would just about finish before it got dark. At the time there had been complaints from crews who had raced in Bristol a few weeks before about finishing in the dark - we didn't want to start getting a reputation for such things.

1pm came - and there was no choice but to cancel.

I had entered a few head races myself - and I was annoyed when one was cancelled and the event wouldn't refund entry fees. I had decided up front that we would refund the entry fees if we cancelled the event. And we have done this ever since - including for the regatta (which has a lot more costs). It doesn't cost a lot for a club to host an event like this. It does if we take into account £2500 for prizes - but unless you are really silly and have the year engraved on them, you get to re-use them for the next event. (Incidentally, we had our prizes engraved with WALLINGFORD HEAD. Winners of recent Wallingford events might notice that the pots say WALLINGFORD ROWING CLUB. Otherwise we could end up with several thousand pounds of prizes stored.)

I think everyone who had entered took the cancellation on the chin. It was so disappointing though. I had put in such a large amount of work but there was nothing we could do. All the volunteers had got behind the event - the club, I think, was behind it too. The volunteers got the rafts back to the club and collected the buoys. We had, I think, sold a large amount of tea and cake - although not enough to cover the costs.

If I look at the briefing sheet for the volunteers I see that the team leaders are mostly the same people that are my best friends now. And some are still involved in the events.

After all the clearing up was done - and I had taken a whole load of stuff home - it was time for the pub. I have to say that those close friends all gave me a cheer, which was very much appreciated. And - there was still freezing fog the next morning. And a bit of a hangover.

The race in 1995 was a bit of a practice for getting everything in place and knowing what to do. A good learning experience. The event in 1996 attracted 97 entries - mainly the same clubs who had come the year before. I was supposed to race in 1995 (although I can't see where I would have had the time) - but in 1996 I was awaiting a hernia operation, so I have never got to race in this event. Entries since 1996 have increased - in 1997 we received 171 entries - and last year's event had a self-imposed limit of 260. The race in 1997 became a lot more serious with the large number of competitors (we estimate over 1000 that year). Safety became a much more important concern. Although there were no incidents, we were reliant on club members driving launches or being on the bank. We did have one on-river rescue service. But we would need to take it much more seriously.

For note: in the 18 years since the first event in 1995, there have been six cancellations (1995, 2000, 2006, 2007, 2009 and 2012), and the 2002 race had a reduced entry due to the stream.

Sunday, 4 November 2012

Getting Wallingford Head started

It's that time of year when rowers have just started winter training, with the racing part consisting of time trials – known as Head of the River races. I thought I'd write a few posts about how Wallingford Head got started.

Winter training normally starts well – you actually look forward to a winter of training after either a good or bad summer of racing, with optimism that this will be your year! Racing during the winter consists of Head of the River races – time trials where crews start at 10 to 15 second gaps and race over distances of around 3 or 4 miles. The culmination of the winter racing season comes with the Head of the River Race on the Tideway in London, where 400 crews race in the Men's Head and over 300 in the Women's Head over the "Championship Course" (the same course as the Boat Race but rowed from Chiswick to Putney).

At Wallingford we have what I regard as the best piece of river on the Thames. It is the longest stretch between locks upstream of Teddington. This meant rowing outings were usually around 16k from the club to Cleeve lock (or 20k lock to lock).

On the river we were running one head race – Wallingford Long Distance Sculls, which started in the early 1970s. It was for sculling boats and run over a 4km course. It had a good healthy entry (although in 1995 it was less than half of what it is now), but with the entry all being small boats it was run as a "service" to rowing rather than to help provide some fund-raising income to the club.

I had joined Wallingford Rowing Club in 1993, competing in a "pub" regatta. I had not long returned from working in Indonesia and entered this event – not winning my first race – so spent the rest of the day having a few beers. But I was fairly hooked on rowing (which became my focus for several years). After the pub regatta I joined a beginners' squad for a 12-week course. At the end of the course in early December a time trial was held. This was my first real race – only around 3k, but lots of fun. From then on I ended up training and racing in a novice squad and having a really good time. And I ended up running the novice squad!

In 1994 Wallingford Rowing Club was attempting to transform itself. It had taken on debt via a debenture scheme to buy new boats and blades, and adult squad rowing at the club was becoming very healthy. I was now running the senior men's squad – which numbered over 30 people following the introduction of professional coaching.

The debenture scheme did indeed transform the club – new boats and blades and lots of people rowing. But the large number of people rowing meant further strain on equipment, with no more money to pay for any new equipment – we were also paying for a professional coach.

At the end of the year another time trial was held at the club, with entries from all the club squads as well as Oxford Brookes (who at the time rented two racks at the club) and some Oxford colleges.

After that event I started wondering why we weren't running a proper head race – rather than one for 20 or so crews. I had just helped at my first Wallingford Sculls and was pretty driven at that time. So (on 25th November 1994) I faxed the Amateur Rowing Association (now British Rowing) to ask for the event to be put into the calendar. The contents of the fax were simply:
"We wish to hold a small boats head (Wallingford Small Boats Head) next year on Sunday December 10th, 1995. Could you please add this to the list of regattas/heads to be passed at the ARA council meeting on the 27th. If you require any further information please do not hesitate to contact me at work on the above number, at home on XXXX XXXXXX or by email at XX@XX.co.uk. Alternatively contact Pete Sudbury, the club captain on XXXX XXXXXX."
Amazingly we were in the calendar after the ARA council meeting.

I'm not sure how I got the club to agree to do this – pretty sure it involved a conversation with the club captain at the time (Pete Sudbury) who probably said great idea and go ahead. Nor did I know what I had let myself in for.

You will note that the intention was to run a small boats head – the idea was to accept entries in sweep-oared boats (to complement the sculling head) but not eights!

Next – the first Wallingford Head

Sunday, 9 September 2012

C#: IEnumerable

I've been asked a few times by my apprentice groups to explain IEnumerable - something that in C# we use every day. IEnumerable is an interface which, when implemented, supports iteration. There are two interfaces - the non-generic one (for looping over non-generic collections) and the generic one.

Firstly, let's look at the definition of the interfaces.
namespace System.Collections
{
   public interface IEnumerable
   {
      IEnumerator GetEnumerator();
   }
}
namespace System.Collections.Generic
{
   public interface IEnumerable<out T> : IEnumerable
   {
      IEnumerator<T> GetEnumerator();
   }
}
Now if we have a List and look at the first line of its definition we will see that it implements the interfaces.
public class List<T> : IList<T>, ICollection<T>, IEnumerable<T>, 
                       IList, ICollection, IEnumerable
Let's create an example to play with. We will have a class (called Place) with some fields.
public class Place
{
    public string PlaceName { get; set; }
    public string GaelicName { get; set; }
    public int Population { get; set; }

    public override string ToString()
    {
        return String.Format("Place {0} ({1}) pop {2}",
            PlaceName, GaelicName, Population);
    }
}
And in a console program let us populate an instance of a List of Place (List<Place>).
List<Place> places = new List<Place>
{
    new Place { PlaceName = "Lewis and Harris", GaelicName = "Leòdhas agus na Hearadh", Population = 21031 },
    new Place { PlaceName = "South Uist", GaelicName = "Uibhist a Deas", Population = 1754 },
    new Place { PlaceName = "North Uist", GaelicName = "Uibhist a Tuath", Population = 1254 },
    new Place { PlaceName = "Benbecula", GaelicName = "Beinn nam Fadhla", Population = 1303 },
    new Place { PlaceName = "Barra", GaelicName = "Barraigh", Population = 1174 },
    new Place { PlaceName = "Scalpay", GaelicName = "Sgalpaigh", Population = 291 },
    new Place { PlaceName = "Great Bernera", GaelicName = "Beàrnaraigh Mòr", Population = 252 },
    new Place { PlaceName = "Grimsay", GaelicName = "Griomasaigh", Population = 169 },
    new Place { PlaceName = "Berneray", GaelicName = "Beàrnaraigh", Population = 138 },
    new Place { PlaceName = "Eriskay", GaelicName = "Èirisgeigh", Population = 143 },
    new Place { PlaceName = "Vatersay", GaelicName = "Bhatarsaigh", Population = 90 },
    new Place { PlaceName = "Baleshare", GaelicName = "Baile Sear", Population = 58 }
};
If we want to output them
foreach (Place place in places)
    Console.WriteLine(place);
And when you run this it will use the ToString() method to display each place.

So what exactly is happening? Let's look at this code
var enumerator = places.GetEnumerator();

while (enumerator.MoveNext())
{
    Place p = enumerator.Current;
    Console.WriteLine("Place {0}", p);
}
Here we are calling the GetEnumerator() method. Our loop is then checking that we can move to the next element - the MoveNext method will return true if there are (more) elements to process. If there are, we can get the current element with the Current property, which we can then print out. When we did our original loop - this is essentially what was happening. The foreach statement in C# hides this complexity. But foreach will work with classes that implement IEnumerable.

So let's extend our example to add another class (IslandGroup) which we will use to encapsulate details about a group of islands - in this case the list of islands above is the Outer Hebrides. So let's create a class for this, with some properties including a Dictionary containing the islands and a property returning the total population. Apologies for the lack of comments - my apprentice groups would crucify me!
public class IslandGroup : IEnumerable<Place>
{
    public string IslandGroupName { get; private set; }
    public Dictionary<string,Place> Islands { get; private set; }

    public IslandGroup(string islandGroupName)
    {
        this.IslandGroupName = islandGroupName;
        Islands = new Dictionary<string,Place>();
    }

    public void AddIsland(Place island)
    {
        // Add the island if it isn't in the Islands already
        if (!Islands.ContainsKey(island.PlaceName))
            Islands.Add(island.PlaceName, island);
    }

    public int TotalPopulation
    {
        get
        {
            return Islands.Sum(v => v.Value.Population);
        }
    }
}
We can use this class and populate the dictionary as well as returning the total population of all the islands with something like this
IslandGroup outerHebrides = new IslandGroup("Outer Hebrides");

// Add each island to the class
foreach (var place in places)
    outerHebrides.AddIsland(place);

Console.WriteLine("Population {0}", outerHebrides.TotalPopulation);
Now right click on IEnumerable<Place> in the definition of the class and choose Implement Interface. This will create two methods, as below.
public IEnumerator<Place> GetEnumerator()
{
    throw new NotImplementedException();
}

System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
    throw new NotImplementedException();
}
Why two methods? Well, look at the definition of IEnumerable<T>, which inherits from IEnumerable. So you need both. In the end we will make one method (the non-generic GetEnumerator()) call the other.
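As a minimal sketch of that delegation (using a throwaway class, not part of the island example), the explicit non-generic implementation simply forwards to the generic one:

```csharp
using System.Collections;
using System.Collections.Generic;

// A throwaway class just to show the forwarding pattern
public class Letters : IEnumerable<char>
{
    private readonly char[] data = { 'a', 'b', 'c' };

    // Generic version - here just borrowing the array's own enumerator
    public IEnumerator<char> GetEnumerator()
    {
        return ((IEnumerable<char>)data).GetEnumerator();
    }

    // Non-generic version required by IEnumerable - simply calls the generic one
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```

Both interfaces are satisfied, but only one real implementation exists.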

Have you heard of the yield keyword? The yield keyword is used in an iterator method to give control back to the loop. So when you do a foreach loop, a method gets called to perform the iteration. In this method we will put a yield statement. So here is some code
public IEnumerator<Place> GetEnumerator()
{
    foreach (var place in Islands)
        yield return place.Value;
}
Our class encapsulates the data for the islands, which we store in a Dictionary. In the loop we want to return each place - hence we are using the .Value property and returning that. We are using yield return <expression>; each time this is reached, the expression is returned to the caller. To execute this we use the instance of the class
foreach (var item in outerHebrides)
    Console.WriteLine(item);
But you may be asking - why don't we just loop through the Dictionary property using something like this
foreach (var item in outerHebrides.Islands)
    Console.WriteLine(item.Value);
It just depends on what data you want to make available and what functionality you want to provide. Let's say we want to return (in this case) the islands with the lowest population first. Since we have written our own iterator we can do this.
public IEnumerator<Place> GetEnumerator()
{
    var inOrder = from i in Islands.Values
                  orderby i.Population ascending
                  select i;

    foreach (var place in inOrder)
        yield return place;
}
Now when we run this, the output will be in the order we determine.
Finally, you don't need to implement the interface IEnumerable<T> - you can instead have methods returning that type. So you could write a method like this
public IEnumerable<Place> GaelicAlphabeticalOrder()
{
    var inOrder = from i in Islands.Values
                  orderby i.GaelicName ascending
                  select i;

    foreach (var place in inOrder)
        yield return place;
}
Which could be executed with
foreach (var place in outerHebrides.GaelicAlphabeticalOrder())
    Console.WriteLine(place);
Thus we don't need to implement this at the class level - or if we do, we can provide alternative methods (and these methods could take parameters etc.)
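For instance, a hypothetical parameterised iterator (following the same pattern as GaelicAlphabeticalOrder above, and assuming it lives on IslandGroup) could filter the islands by population:

```csharp
// Hypothetical addition to IslandGroup: yield only islands with at least minPopulation people
public IEnumerable<Place> WithMinimumPopulation(int minPopulation)
{
    foreach (var place in Islands.Values)
    {
        if (place.Population >= minPopulation)
            yield return place;
    }
}
```

Called as foreach (var place in outerHebrides.WithMinimumPopulation(1000)) this would return only the five islands in our data with a population of at least 1000.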

The yield keyword can be used as above, but also as
yield break;
which will end the iteration. You can also use the yield keyword in static methods, for example
public static IEnumerable<string> ScottishIslandGroups()
{
    yield return "Outer Hebrides";
    yield return "Inner Hebrides";
    yield return "Shetland";
    yield return "Orkney";
    yield return "Islands of the Clyde";
    yield return "Islands of the Forth";
}
Or as a property
public static IEnumerable<string> WelshIslandGroups
{
    get
    {
        yield return "Anglesey";
        yield return "Bristol Channel";
        yield return "Ceredigion";
        yield return "Gower";
        yield return "Gwynedd";
        yield return "Pembrokeshire";
        yield return "St Tudwal's Islands";
        yield return "Vale of Glamorgan";
    }
}
Used as
foreach (string islandGroup in IslandGroup.WelshIslandGroups)
    Console.WriteLine(islandGroup);
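Going back to yield break - here is a small sketch (a hypothetical static method, not part of the island example) showing it ending an iteration early:

```csharp
// Counts upwards but uses yield break to stop once the limit is reached
public static IEnumerable<int> NumbersBelow(int limit)
{
    for (int i = 0; ; i++)
    {
        if (i >= limit)
            yield break;   // ends the iteration - nothing more is returned
        yield return i;
    }
}
```

So foreach (int n in NumbersBelow(3)) would print 0, 1 and 2 - the yield break stops the otherwise infinite loop.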

Thursday, 9 August 2012

C#: Anonymous types

My previous post on Extension methods talked about how they were implemented to allow LINQ to operate. Another requirement for LINQ is Anonymous types.

Anonymous types are real types - but the actual name of the type is not known (at least to you as the developer). The name of the type is known to the compiler but you will not be able to access it. In an anonymous type you define properties, which will be read-only, and the type of each property will be inferred from the data put into it. Let's look at an example
var book = new { BookName = "Twilight", Author = "Stephenie Meyer", ISBN = "0316160172" };

Console.WriteLine("Name {0}", book.BookName);
Console.WriteLine("Author {0}", book.Author);
Console.WriteLine("ISBN {0}", book.ISBN);

Console.WriteLine(book);

Console.WriteLine(book.GetType());
Here we create a variable called book which is output using its properties (BookName, Author and ISBN). We also output the variable itself and the name of its type.

The output of book (using the ToString() method) looks like { BookName = Twilight, Author = Stephenie Meyer, ISBN = 0316160172 }, which is useful for debugging. Finally, the type name is not useful to us - but internally this is what the compiler will store it as.

Looking at the code, we are using the var keyword. A variable declared with the var keyword is still strongly typed - but the compiler will determine the type. If we have a List and want to loop through it we could use
List<string> inputs = new List<string> { "one", "two", "three", "four", "five" };

foreach (var input in inputs)
    Console.WriteLine(input);
Here the use of the var keyword is allowed - but the type is known. I have begun to see developers using the var keyword all the time - rather than, in this case, using the string keyword. You know what the type is - why make it difficult for other developers reading your code! Just because you can do something doesn't mean you have to do it.

But in our example to create an instance of our anonymous type we used var. We are forced to use var as we don't know what type we have.

In LINQ we generally don't know what type we are creating. Let's look at the LINQ example that converts the strings to upper case. I am using var here - but what type is being returned?
var outputs = from s in inputs
              select s.ToUpper();

foreach (var output in outputs)
    Console.WriteLine(output);
The type of outputs is IEnumerable<string> - this is not an anonymous type. We know that when the LINQ query is executed (when it is first accessed) it is going to return something of this type. But, by convention, we use the var keyword.

Another common thing to do is the following
var outputs = (from s in inputs
              select s.ToUpper()).ToList();
This forces immediate execution - but the type is known, it is List<string>. But, again, by convention we use the var keyword.
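To illustrate the difference (a small sketch): the deferred query sees changes made to the list after the query was defined, while the ToList() version takes a snapshot at that point.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

List<string> words = new List<string> { "one", "two" };

var deferred = from s in words select s.ToUpper();              // not executed yet
var snapshot = (from s in words select s.ToUpper()).ToList();   // executed now

words.Add("three");

Console.WriteLine(deferred.Count());   // 3 - the query runs here and sees the new item
Console.WriteLine(snapshot.Count);     // 2 - taken before "three" was added
```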

Here is an example with an anonymous type.
var infos = from s in inputs
            select new { Original = s, Upper = s.ToUpper(), 
                         Lower = s.ToLower(), Length = s.Length };

foreach (var info in infos)
    Console.WriteLine(info);
In this example the output will be an anonymous type - with four properties (Original, Upper, Lower and Length). In this case we are forced to use the var keyword.

As I said earlier a variable declared with the var keyword is still strongly typed. You will find that the intellisense works.

Sunday, 5 August 2012

C#: Extension methods

Extension methods were introduced in .NET 3.5 and allow you to add functionality to a class without changing the class. That doesn't sound quite right though. What you actually do is write functionality that operates on an instance of a class, using the public methods/fields/properties of that class, so that it looks like you have added functionality.

We define an extension method as a static method within a static class. The first parameter of the method identifies which class you are augmenting and is prefixed with the this keyword.

Let's create a simple extension method which operates on the long type. This method will return a bool indicating whether the number is pandigital. A pandigital number is one in which each of the digits 0 to 9 appears at least once among its significant digits - for example, the numbers 1234567890 and 13402456789 are pandigital.

The signature of this method (within our static class) will be
public static bool IsPanDigital(this long number)
The main difference between this method and a normal method is the use of the this keyword before the type of the first parameter. Once this method is written it can be used as
long number = 1234567890;
bool pd = number.IsPanDigital();
The full method could look like this
public static class PanDigitalExtensionMethod
{
    public static bool IsPanDigital(this long number)
    {
        string strNumber = Convert.ToString(number);

        for (int i = 0; i < 10; i++)
        {
            if (!strNumber.Contains(i.ToString()))
                return false;
        }
        return true;
    }
}
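A quick sanity check of the method (results worked out by hand; note the L suffix - the extension is defined on long, and extension-method lookup will not apply an implicit numeric conversion from an int):

```csharp
Console.WriteLine(1234567890L.IsPanDigital());  // True - every digit 0-9 appears
Console.WriteLine(13402456789L.IsPanDigital()); // True - digits can repeat
Console.WriteLine(12345L.IsPanDigital());       // False - 0 and 6 to 9 are missing
```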
To use this, add the correct references and using statements - or put all your extension methods in one namespace (normally in one directory).

When you use it, there looks to be no difference between calling the extension method and calling any other method that is defined on the class.

Where is this used?

I first (explicitly) came across this when developing an MVC (web) application. Common practice is to create "Html Helpers" - for example, with the following extension method
namespace Helpers
{
  public static class HtmlHelpers
  {
    public static string OrdinalSuffix(this HtmlHelper helper, int input)
    {
      if (input == 0)
          return "";
      else
      {
          //can handle negative numbers (-1st, -12th, -21st)
          int last2Digits = Math.Abs(input % 100);
          int lastDigit = last2Digits % 10;

          //the only nonconforming set is numbers ending in 
          //   <...eleventh, ...twelfth, ...thirteenth> 
          return input.ToString() + 
              "thstndrd".Substring((last2Digits > 10 && last2Digits < 14) 
                                            || lastDigit > 3 ? 0 : lastDigit * 2, 2);
       }
    }
  }
}
This method will take an integer (e.g. 2) and return a string with the number followed by its ordinal suffix (e.g. 2nd). You could then use this from within a View - for example, within an ASPX view you could have code such as
<p>
<%: Html.OrdinalSuffix(history.Position) %>
</p>
In an MVC application you would normally edit the web.config file to add the namespace for the Helpers (under the system.web/pages/namespaces node) - thus making it available in all your pages without having to add an explicit using clause.

Now, I said MVC was the first time I encountered extension methods - in fact, I said the first time I explicitly encountered them. The .NET development team didn't just decide to add this functionality. Whilst it looks cool to use - I have reservations about using extension methods as we have used them above. If I were looking at some code and saw a method .IsPanDigital then I would need to make the jump that this was an extension method. Although, if you put the mouse over the call (in Visual Studio) it will tell you that it is an extension method.

So why were extension methods added? To help support LINQ. You may well have used them without knowing it. Let's take an example where we want to retrieve from a List the strings that are three characters in length. We will work with this list
List<string> inputs = new List<string> { "one", "two", "three", "four", "five" };
Code to do this with LINQ might be
var threeLinq = from r in inputs
                where r.Length == 3
                select r;
foreach (var result in threeLinq)
    Console.WriteLine(result);
Running this will give the outputs "one" and "two". But underneath, this is the code that is being run
var three = inputs.Where(s => s.Length == 3);
foreach (var result in three)
    Console.WriteLine(result);
The function Where is an extension method (defined on IEnumerable<T>). Putting your mouse over the Where will show you.

Another quick example
var ordered = inputs.OrderByDescending(i => i);
foreach (var result in ordered)
    Console.WriteLine(result);

var orderedLinq = from r in inputs
                  orderby r descending
                  select r;
foreach (var result in orderedLinq)
    Console.WriteLine(result);
This example shows using the OrderByDescending extension method - in LINQ query syntax you use the orderby and descending keywords.

In both the LINQ examples we are using the select keyword. Let's look at that - we will retrieve our output in upper case
var three = inputs.Where(s => s.Length == 3).Select(s => s.ToUpper());
foreach (var result in three)
    Console.WriteLine(result);

Console.WriteLine("---");
var threeLinq = from r in inputs
                where r.Length == 3
                select r.ToUpper();
foreach (var result in threeLinq)
    Console.WriteLine(result);

In LINQ, using select r.ToUpper() translates into a call to the Select extension method.

Sunday, 24 June 2012

The gradual (and hidden) move to the cloud

Well, it's started for me - I've begun to put my data into the cloud. I have been using the cloud for a while, but only to set up web site hosting on the Amazon cloud, and that was driven more by cost and quick availability. But now I've started to move files onto it.

My first step this week was the move of some of my personal files. I had a bit of a shock when one of my notebooks started to play up. Whilst I could boot it up it was very unstable and ultimately I had to restore a backup I had made. (I don't do backups often, but did one a few weeks ago.) I have so many memory sticks and external drives (two 1TB and one 400GB) as well as an old Dell server with several disks, that I couldn't reliably tell you where everything was - except for what was on my main notebook.

So whilst the notebook was still booting up I copied all my files onto a disk and then onto Google Drive. (I already use Dropbox for work stuff with the young developers I work with.) Then I restored the PC to a two-week-old image, attached Google Drive and my files were back.

I am however missing some email. The only things that weren't on Google Drive were my email and the small number of websites I work on. I did have backups of them, and although I am currently missing one week of my personal email I think I will be able to recover that from those backups.

I'll need to find a cloud location for my websites. In the meantime I am seeing what TFSPreview (Team Foundation Server Preview) is like for source control for my websites. I know it means my code is being held on someone else's server.

Next step is to look at options for hosting my email. Currently it is just POP/SMTP and all downloaded to my PC.

But what would have happened had my notebook crashed completely? It wouldn't have been the end of the world. I did have backups - maybe a few weeks old. A few weeks earlier, though, that backup would have been a few months old.

What are the main issues with the cloud? There are, in my mind, two. The first is privacy and security. There are private documents there - financial stuff, letters etc. - although probably nothing that private, and the financial information could probably be ascertained via a credit check. Can I trust the suppliers of services like Dropbox, Google Drive or Microsoft SkyDrive?

The second is cost. At the moment Google Drive is free (up to 5GB) and TFSPreview is free (although it is in trial mode, a free version will be available later). I'm not sure if my options for email will always be free - I'll report on that when I know. At the moment I am not "clouding" stuff like photos, movies, old letters and old code (that I refer to every now and then), but there will come a time, I think, when I'll be happy to pay for that. After the little shock this week, when the one and only location of my latest files was playing up, it was good to come out of the week with my data secure and backed up.

The final thing I did this weekend was get the Windows 8 Release Preview up and running on my old notebook. Part of the reason was that I had backed up (to disk) what was on this notebook and rebuilt it, as it wasn't used often, had Vista on it and I wanted to have a play with Windows 8 (I actually quite like it).

There were a few things I also set up cloud-based. Firstly, Google Drive downloaded all my latest files, and after installing Visual Studio for software development I pulled down some of the websites I was working on from TFSPreview.

Next, Windows 8 can use my Microsoft Live login for the machine. This should mean settings are saved with Microsoft Live, and new machines should then pick up those settings. I also use Google Chrome for most of my browsing and had registered with Google (for things like Google Analytics). At some point I must have signed in to Chrome, because when I logged in via Chrome my bookmarks came back on this machine (so they are now in sync). My surprise was that I couldn't remember registering my bookmarks via Chrome.

A final thought about the cloud is your "apps", or programs you have bought. Whilst in the future I might be a cloud app user (I have used Google Apps and the online Word for editing documents) I am not there yet. Looking at Windows 8, I think it will be a success for those users who want to browse the web, look at mail, use instant messaging and see news/sport. All of this can be done via apps - they might never need to use the desktop at all. The cloud will know they use these apps, remember that, remember the settings and hold the data.

Monday, 18 June 2012

Playing with Mono on the Raspberry Pi

Mono is a free and open source (FOSS) project to implement the (Microsoft) .NET framework, including the C# compiler and Common Language Runtime. Before installing you need to get the Raspberry Pi up to date with the commands

sudo apt-get update
sudo apt-get upgrade

I actually ran the commands twice, as the first run gave some warnings and said to run them again. Then to install Mono run the command

sudo apt-get install mono-complete

which will download and install Mono. Once done (it will take a while), create a directory and in it create a file called test.cs (you will need to learn how to use an editor like vi or nano) with the following contents.

using System;  
namespace hello {
    public class HelloWorld {

       public static void Main()
       {
           Console.WriteLine("Hello, World!");
       }
    }
}

Run the following command to compile the file

mono-csc test.cs

If it works then it creates a file called test.exe. You can run it directly using

./test.exe

Or more correctly you should use

mono test.exe

Sunday, 17 June 2012

Getting your Raspberry Pi up and running

Following on from my first thoughts here are the steps I went through to get the Raspberry Pi up and running.

Firstly, this isn't all my own work - it is put together from a number of other websites. The Raspberry Pi web site is a good place to start for information. There you will find a downloads page with three versions of Linux. I originally started with the Debian Squeeze version, then went to Arch Linux, and then came back to the Squeeze version. I intend getting the Arch Linux one up and running later, as it might give more control - I can build exactly what I need.

1. Download from the Raspberry Pi downloads page the debian6-19-04-2012.img file (or the latest that is available).

2. I use a PC so have used a utility called Win32DiskImager. Download this. It isn't installed on your PC; you just need to unzip the file. Once unzipped, run Win32DiskImager.exe. Choose the location of the Debian image file and write it to an SD card you have inserted into your PC (make sure you choose the right disk, as the tool apparently does not check where it writes to).

3. Once written, put the SD card in the Raspberry Pi. Attach a keyboard, mouse and monitor. You will also need a CAT 5 cable to connect it to your router. Your router should then provide an IP address to the machine.

4. Connect the power cable. You will need a 5V micro USB connector (as used by some mobile phones). The power light should come on and the display should run through some boot messages. If your display stays blank, I have read reports of problems with some displays (some with HDMI to DVI), and also reports of resolution issues (black borders and the resolution reduced). For the former you need to do some searching on the internet for solutions, or use another monitor initially. For the latter I wouldn't worry about it yet.

5. The device will boot up to do some initialisation - then reboot itself. You will be presented with a login screen. The username is pi and the password is raspberry.

6. There is a little bit of configuration work to do. When you log on to a Linux machine you generally do not use the superuser (the user root) - we are logging on as the user pi, which means we do not have superuser privileges. However, if we type sudo at the start of a command it will run as if we were the superuser. (Let's not worry about how this works at the moment.)

7. The first thing I did was enable SSH. This is the Secure Shell, which allows us to connect to the RPi from another computer (securely). To do this you need to execute the commands below and reboot

cd /boot
sudo mv boot_enable_ssh.rc boot.rc

8. To log in from your PC you should download a utility such as PuTTY. This is a telnet and SSH client. You just need to download the executable and run it (it is not installed, so will not be on your Start menu). You need to get the IP address of your RPi. The following command

ip addr

will put out some information about your connections.

pi@pi:~$ ip addr  
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00      
    inet 127.0.0.1/8 scope host lo  
2: eth0:  mtu 1488 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether b8:27:eb:f6:67:12 brd ff:ff:ff:ff:ff:ff      
    inet 192.168.11.7/16 brd 192.168.255.255 scope global eth0  

Once you have the IP address (in this case 192.168.11.7) you can connect using PuTTY. Run the PuTTY executable and enter the IP address. You should now be able to log on (and thus not need a keyboard, mouse and monitor attached to your RPi - although it might be useful to keep the monitor attached to see any messages that get displayed from time to time whilst you are installing).
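If you only want the address itself out of that output, awk can do the filtering. A small sketch, assuming the eth0 line format shown above - the echo replays the sample line so the pipeline can be seen end to end; on the Pi itself you would pipe the real command instead:

```shell
# On the Pi: ip addr show eth0 | awk '/inet /{sub(/\/.*/,"",$2); print $2}'
# Here we replay the sample line from the output above:
echo "    inet 192.168.11.7/16 brd 192.168.255.255 scope global eth0" \
  | awk '/inet /{sub(/\/.*/,"",$2); print $2}'
# prints 192.168.11.7
```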

9. I used a 4GB SD card - but the image does not use all of it, so some reconfiguration is needed using a utility called fdisk. First, if you run the command

sudo df -h

it will display a list of the disks available. You will see on this list /dev/mmcblk0p1 or similar. Things beginning with /dev are devices, and /dev/mmcblk0 is your SD card. It is split into three parts: 1 is the "boot" partition, which is used to start up the device; 2 is the "root" disk; and 3 is set as "swap", which the Raspberry Pi uses to hold the contents of memory when all the physical memory is used.

To reconfigure the disk run the command

sudo fdisk -uc /dev/mmcblk0

Whilst running type the commands

  • p - which will print the current list of partitions and sizes
  • d - to delete a partition
  • 3 - to specify the swap partition to be deleted
  • d - to delete a partition
  • 2 - to specify the root disk
  • n - to create a new partition
  • p - to create a primary partition
  • 2 - to specify the root disk
  • 157976 - the start of the partition. This may be different for you - see the output from p, which displays the original partitions.
  • Enter - press enter for the next question and it will use the rest of the disk
  • w - to write the changes
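The keystrokes above can also be fed to fdisk non-interactively. This is a sketch only, and dangerous - fdisk rewrites the partition table, and the start sector (157976 here) must match what p printed on your own card:

```shell
# The fdisk keystrokes from the list above, one per line
# (the blank line is the Enter for "use the rest of the disk"):
keys='p
d
3
d
2
n
p
2
157976

w'
# On the Pi you would then run (do NOT run this blindly):
#   printf '%s\n' "$keys" | sudo fdisk -uc /dev/mmcblk0
printf '%s\n' "$keys"
```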


You now need to reboot the RPi, with

sudo reboot

Once rebooted you then need to run the command

sudo resize2fs /dev/mmcblk0p2

Then reboot again and check the disk (with df -h) to see if it is correct. I will post later how to add back swap.

My first view of the Raspberry Pi

A (large) number of years ago I used UNIX machines almost exclusively. I was involved in delivering a software product that was built on a large number of architectures (from some DOS and Windows implementations to VAX VMS to IBM MVS to all flavours of UNIX on IBM, Sun, Silicon Graphics and other hardware). But it had been a while since I used a UNIX box in anger (although in the last 10 years I have installed Linux so that my previous business could have a Bugzilla server with MySql).

The Raspberry Pi was released a few months ago and I was fortunate to get in early and order one each from the two distributors (at release you could only order one at a time). So now I have two and it was time to play. I am still working out what to do with them!

They were £25 each (although by the time you add delivery it comes nearer to £30) and have an ARM chip, 256MB of memory and apparently a good video chip, with HDMI output, two USB ports, a network connector and power (plus audio & video and some on-board connectors).

To get the Raspberry Pi up and running you need to download an implementation of Linux from the Raspberry Pi website (I used the Debian Squeeze one initially), write it to an SD card (not supplied), do some configuration and there you go.

My first "project" with them was to get the Microsoft .NET framework working using the Mono project. I more or less got it up and running from scratch in about 90 minutes (all the time remembering more and more about the fun of typing commands and using the vi editor!). I also got ASP.NET running on the box, and a WinForms program.

Not quite right I know, but I accessed the machine over the network from my notebook (whilst watching the football) - I don't really have a spare monitor to plug it into. So once it was booted up I enabled SSH, which allows remote access.

First impressions are that it is a working Linux box. But if I just wanted to do some Linux stuff I could have built a virtual machine on my PC (using VMware Player or one of the other bits of software). It was good to see something so small run websites using Apache.

It is being targeted at young people and schools. I'm not sure that it will completely work. They are going to provide it boxed (the ones I have are just the board) and with documentation. It really needs this for people to get up to speed, but also for those who want to do a particular task (e.g. install Mono, play video, create video). Fortunately I have an idea of what I am doing, even if it has been a while (I still think of UNIX rather than Linux).

My first computer (a VIC 20) gave me the opportunity to code - but things were simpler then. You learned from scratch from the manual. I think I only succeeded by continual study of the manual, magazines and anything else I could get my hands on. It was also new - creating something on the screen of a VIC 20 felt like an achievement. I'm not sure the young today will feel the same way (although my 8-year-old god-daughter uses Scratch to create programs). I think what is needed is some Raspberry Pi "training courses" that keep within the limits of the RPi but keep these people interested.

There are some additional costs to take into account - you need an SD card for each box (how many of those are going to go missing in schools!), plus a monitor taking an HD input (HDMI or DVI), which means a cable for that. Then a keyboard and mouse, which most schools will have (once they recycle old machines - I can't see them taking them from working PCs). On top of this, schools will need some CAT5 sockets (so you can plug them into the internet), and this means either setting up each SD card with a different static IP address or putting in a DHCP server (your home router does this for you, but home routers aren't really suitable in a school). Finally, take into account that these machines are small and might go for a walk (but they are cheap). All of this, though, is something that some simple training of teachers will solve.

So first one up and running with Debian and Mono with Apache.

I think with the second one I might work with a friend on getting it attached to his solar panel inverter to read the performance data, which would then be uploaded automatically to a monitoring site - that sounds more like a job for an embedded device. The first one may well become a proxy/VPN server (meaning I don't have to leave computers switched on when I am away), or a Bugzilla server, or a source control server (maybe not).

As an aside, my first thought when I heard about these was that if I were still managing a network centre, a rack of these rather than a single 1U server running some network services (such as DNS) might be a cheap way of providing a lot of redundancy. So having some of these at home behind a VPN might be a thought.