Tuesday, June 25, 2013

Self-referencing Generic Type Constraint

Here is a simple, nice piece of code where I found a self-referencing generic type constraint useful.



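A minimal sketch of such an interface (the Parent and Children members are the ones discussed in the side note below; their exact signatures here are illustrative):

    using System.Collections.Generic;

    // T is constrained to the very interface being declared, so whatever
    // implements IHierarchical<T> must pass itself (or a compatible type)
    // as the type argument.
    public interface IHierarchical<T> where T : IHierarchical<T>
    {
        T Parent { get; }
        IList<T> Children { get; }
    }
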
This interface is used to represent a hierarchy. The generic type parameter T is constrained to IHierarchical<T>. Although at first glance it might seem like recursion, of course, it isn't. It just requires the type argument to implement the same interface.

Looking around for others using this pattern, I found this post by Eric Lippert, who more or less discourages it. Although I agree with him that this pattern can be confusing if not used properly, I still think it can be quite elegant and expressive when used in a suitable context.

I think this hierarchy example is a good fit for the pattern.

Side note on the actual use of this interface:
As I said, I use this to represent a hierarchy. In my code this interface is actually implemented by several classes that participate in several hierarchies. The use of the Parent pointer seems redundant, but in my case it is not, as the classes that implement this interface are actually SharePoint content types: the Children property is a multi-lookup column in SharePoint and the Parent pointer is the column at the other end of the relationship. Both values are populated by executing a single Linq2SharePoint query.

Friday, March 8, 2013

Two way data binding on a DependencyProperty

I was creating a WPF UserControl today which (among other things) exposed a DependencyProperty for use in data binding. The problem I came across was that it didn't bind two-way by default, and for a while I couldn't figure out why, nor what the slickest way to fix it was.

So here is the trick in case anyone stumbles upon this post while facing a similar problem. In the metadata for your DependencyProperty, don't forget to set the BindsTwoWayByDefault property to true.
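
A minimal sketch of what that looks like (the property name and owner type here are illustrative):

    using System.Windows;
    using System.Windows.Controls;

    public class MyUserControl : UserControl
    {
        // Registering the DependencyProperty with FrameworkPropertyMetadata and
        // BindsTwoWayByDefault set to true makes bindings to it two-way unless
        // the binding explicitly says otherwise.
        public static readonly DependencyProperty ValueProperty =
            DependencyProperty.Register(
                "Value",
                typeof(string),
                typeof(MyUserControl),
                new FrameworkPropertyMetadata { BindsTwoWayByDefault = true });

        public string Value
        {
            get { return (string)GetValue(ValueProperty); }
            set { SetValue(ValueProperty, value); }
        }
    }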

Tuesday, December 29, 2009

Starting RSS Graffiti

A year ago, around January 2009, I was looking for a way to liven up the Fan Page I was maintaining in Facebook for Blues.Gr (the Blues social network I run on Ning). One easy thing to do was obviously to post updates from Blues.Gr to the Facebook Fan Page wall, so that people could easily learn what was going on over at Blues.Gr through their daily Facebook activity feeds. The best way to do that, of course, was to do it automatically by reading the RSS feeds available on Blues.Gr and posting any new entries to the Facebook Fan Page.

So I started looking for a Facebook application that would do just that.

Monday, March 17, 2008

ECDIS software



I am working on two ECDIS projects in parallel. Both have to do with monitoring sea traffic.

As some details of the projects are confidential, I am not currently able to say much about the projects themselves, but I can talk about my work on them.

Project A:

The goal is to create a system that helps track ships that are involved in, or responsible for, different types of events that can happen at sea. For instance: an oil spill is located and reported somewhere, and the authorities need to identify the vessels suspected of causing the environmental damage.

The project approaches the problem by creating a system that allows for sea traffic monitoring and then correlates that information with earth observation (EO) data to produce a list of suspect vessels. Vessels are ranked by certain qualitative, quantitative and spatial criteria to help the user make the final decision and identify the offending vessel, using the hints provided by the system along with his/her experience and best knowledge.

What I am building for this project is a custom ECDIS system that implements all the functionality required for this application.

Sea traffic is monitored using a network of AIS receivers. AIS messages are temporarily recorded in local databases and transmitted in real time to a central database through web services over a VPN.

The ECDIS software I am building uses AIS signals to mark the positions of different vessels at specific points in time. It also provides a mechanism for importing and recording vector data that come from EO sources (processed satellite imagery). EO data are used to identify event locations and possibly other targets detected in the area of the event during data capture.

The concept of the software is described below. I will post some screenshots or a screencast when possible to support the description:

The application window is divided vertically into two panes. The main pane (on the left) is the map pane where the ENC is displayed. The second narrower pane on the right is used for context sensitive information display. Both panes are tabbed for better information and functionality grouping. Application commands are available through main and context menus and toolbars.

Below the ENC pane there is a set of playback controls much like those you see in video players: a Play/Pause button, a timeline you can scroll, a playback speed selector, etc.

All data recorded in the system (be they AIS messages, EO data or Events) are tied to a specific point in time and space.

The concept is that while the system's database maintains historical sea traffic data for long periods of time, the user only needs to focus on a specific subset of those data related to a particular event. To enable this approach, the application allows the user to select one of the recorded events to focus on. Focusing on an event implicitly means filtering AIS and EO data to a specific point in time and space; more accurately, around a specific point in time and space. This way the system loads only the relevant data from the database, which makes processing faster and consumes fewer system resources. Selection of the timeframe and area is made either implicitly or explicitly by the user during the selection of the event under investigation.

So keep in mind two concepts here:
  • the "investigated time-frame" which is essentially a period determined by a staring date/time and a length (duration)
  • and the "investigated area" determined by the central location of the event and a range in nautical miles around it.
All data that fall into the selected time and space are loaded into in-memory data sets. But not all those data are concurrently displayed in the ENC pane. The AIS and EO data displayed in the ENC pane at any given time are a subset of the loaded data, determined by:
  • the "focus time"
  • the length of the "visible time-frame"
  • and the view-port (which is identified by its center coordinates and range)
The "focus time" is equivalent to the playback position in a video player. As the user scrolls the timeline control to the left or right, the focus time changes.

The length of the "visible time-frame" refers to the time span before focus time during which all recorded signals should be visualized. For instance if the visible time-frame is set to 1 hour then the track behind a vessel's "current" position will be displayed for the last hour ("current" being determined by "focus-time").

The "view-port" is nothing more that the visible area of the ENC and is defined by means of panning and zooming the map.

You are probably already getting the "big picture": a system that plays back what happened around an event (e.g. an oil spill) and allows you to watch it like you would watch a video. What you see is what you would see if you were flying over the event in a plane at the selected time, only mapped onto the ENC, loaded with useful information and, of course, interactive.

During playback, you can move around the map by zooming and panning, change the playback speed and generally interact fully with the application (all functionality remains available).

By pointing your mouse to a vessel's latest signal or track you can see relevant information on the right pane of the application. Information available for each vessel includes all category 5 AIS message fields (ship static and voyage information), all fields of categories 1, 2 & 3 of AIS messages (position reports) and derived information based on algorithms and ranking databases, that help classify the ship and rank the probability of it being the offending vessel.

At the time of writing, this project is in its final stages. It has been demonstrated to the customer and, given their satisfaction, it is pending some further development and optimization before it is officially presented and delivered.

Project B:

Project B is an entirely independent project from Project A. Nevertheless, it is so relevant in context that it is being developed in parallel. Actually, so far I have not seen a need to even branch the first project. Minor behavioral differences are handled very effectively through configuration files.

The goal of this project is to use AIS, VTS radar and EO data to identify certain types of vessels in the context of naval security.

AIS, VTS and EO data are correlated with data fusion algorithms and the results are again ranked by risk level.

My work in this project involves the creation of the visualization console. Data acquisition and fusion are handled by other project parties, and the results of their work are just input data for my application.

The main difference from Project A is that this time, the software must be used mainly for near real-time monitoring. Playback is just a useful feature.

This project uses more sophisticated data sets and also includes estimated data. There are also considerable differences in data formats. All these were handled properly during the design phase of the software, and provisions were made so that it can read a wider variety of data sources.

This project is also approaching its demonstration phase.

Technologies used in both projects include:
Apart from the above technologies, these projects required extensive understanding of ECDIS, ENCs, AIS, VTS and EO related literature and, of course, engineering know-how in both software and earth sciences.

Both systems are being developed using Microsoft Visual Studio 2005 and C#.

XML CV

This one is a personal project. It started a few years ago (somewhere in mid 2003), when I created my previous web site (same address, older technology), which was based on Windows SharePoint Services version 2.0.
The goal of the project was to create a single-source CV for multiple platforms and applications.
Requirements:
  • Edit the CV content only once
  • Maintain the CV in two languages (English & Greek)
  • Use everywhere
    • web for online reading,
    • MS Word & Adobe PDF for distributing and printing,
    • whatever other application comes along, as needed
The obvious solution was to use XML and XSLT.
After browsing around for standards, I discovered (back then) the XML Résumé Library project, which defined an XML vocabulary for CVs along with a set of tools for visualizing and printing them.
I didn't like the tools they provided as I wanted to approach the whole thing "the Microsoft way". So I just took the DTD from there.
The first thing I tried was to create an InfoPath form from the DTD, but converting the DTD to XSD did not yield a solid, reasonable schema. It would be nice to have an InfoPath form for editing my CV, but the time that would have to be devoted to creating a solid result was not worth it. So I scratched that effort and decided to edit in straight XML.
The next thing I had to do was create an XSL transformation to visualize my CV on my web site. I wanted to maintain the layout and style of the CV I already had in Word format, so I created the XSLT from scratch.
Now I had another problem to solve: I needed two versions of the CV in two different languages. There was no provision for multilingualism in the XML Résumé Library DTD, so I either had:
  • to alter the DTD,
  • to trick it somehow (using the "targets" attribute),
  • or to just maintain two different XML sources, one for each language.
I opted for the third approach (because actually I did not think of the second one at the time).
Maintaining two XML files was not the actual "problem". The problem now was maintaining a single XSLT, since apart from the content provided in the XML, I had to translate the static text of the CV (labels etc.).
To do that I used a separate XML file (which I named Resources.XML) with a schema I defined for that purpose. This XML file included all static text in translated versions, distinguished by a "language" attribute. The Resources.XML file was included by the XSLT using xsl:include and was referenced wherever needed, passing it a parameter that specified the selected language.
So far I had the following files:
  • "My Resume.Greek.XML" containing the Greek version of my CV.
    This file had to be edited every time I needed to update my CV in Greek.
  • "My Resume.English.XML" containing the English version of my CV.
    This file had to be edited every time I needed to update my CV in English.
  • Resources.XML containing the labels used in my CV localized in both languages.
    This was a static file created once and never really had to be altered. Here is a sample part of the file:

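Something along these lines (element names are illustrative; the essential part is the "language" attribute distinguishing the translations):

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
      <label name="Education">
        <value language="English">Education</value>
        <value language="Greek">Εκπαίδευση</value>
      </label>
      <label name="WorkExperience">
        <value language="English">Work Experience</value>
        <value language="Greek">Επαγγελματική Εμπειρία</value>
      </label>
    </resources>
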
  • "My Resume.XSL" containing all the XSL transformation required to convert either language source of my CV to DHTML. This was a static file created once and only had to be altered whenever I needed to improve the style and layout of the output. Here is the rough structure of the XSL file:


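Roughly a skeleton like the one below (heavily simplified and illustrative; for brevity this sketch pulls the labels in with document() instead of the inclusion mechanism described above):

    <?xml version="1.0" encoding="utf-8"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>

      <!-- localized labels; $language is set by the including stylesheet -->
      <xsl:variable name="labels" select="document('Resources.XML')//label"/>

      <xsl:template match="/resume">
        <html>
          <body>
            <xsl:apply-templates/>
          </body>
        </html>
      </xsl:template>

      <!-- helper that emits a static label in the selected language -->
      <xsl:template name="label">
        <xsl:param name="name"/>
        <xsl:value-of select="$labels[@name = $name]/value[@language = $language]"/>
      </xsl:template>

      <!-- ... templates for the individual CV sections go here ... -->
    </xsl:stylesheet>
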
  • "My Resume.Greek.XSL" which is a minimal file that just stets a variable indicating the selected language to Greek and includes "My Resume.XSL" to do the actual transformation. This file is static and never needs to be edited either. Here is the content of this XSL file:


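Something along these lines (an illustrative sketch):

    <?xml version="1.0" encoding="utf-8"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- select the language, then delegate everything to the shared stylesheet -->
      <xsl:variable name="language" select="'Greek'"/>
      <xsl:include href="My Resume.XSL"/>
    </xsl:stylesheet>
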
  • "My Resume.English.XSL" which is a minimal file that just stets a variable indicating the selected language to English and includes "My Resume.XSL" to do the actual transformation. This file is static and never needs to be edited either. The content of this file is analogous to it's Greek equivalent displayed right above. It just changes the value of the "language" variable to English.
These are the basic elements of my first XML CV solution. In practice I maintained a different XSLT for use in MS Word because the XSLT for the web included DHTML interactivity (JavaScript) and slightly different styling than what looked best for print.

All these were not as easy or straightforward as they seem. Problems I faced included:
  • Issues with MS Word integration:
    • CSS needed some tweaking to produce the results I wanted in Word.
    • I also had to have an automatically updating Word document. So I used just a Word field to include the XML and transform it on the fly, and a Word macro to automatically update the field every time someone opened the file. Here is the field code:


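A field along these lines (the paths are placeholders; INCLUDETEXT's \t switch points it to the XSLT used to transform the XML on the fly):

    { INCLUDETEXT "C:\\CV\\My Resume.English.xml" \t "C:\\CV\\My Word Resume.xsl" }
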
  • Issues with SharePoint integration when moving to WSS 3.0:
    See this relevant post for a clue.
  • Issues with PDF transformation:
    I never really tried to solve this one. I am still creating PDF versions by hand (by saving to PDF through MS Word 2007). I will have to look for an automated solution for this one in the future.
All these pretty much remain under investigation, since they need time and effort which, for the moment, is not practically worth spending.

So this about sums up the first phases of this project. Which brings us to 2008. Many things have changed since 2003, when all this started, and even since 2006, when the migration to WSS 3.0 caused me to re-investigate some of the project's details.

Today we have new things like Europass, hResume and microformats, the HR-XML specs, LinkedIn, Xing and other Web 2.0 stuff. So the project is being revisited again these days, in any spare time I can get hold of for it.

What do I currently do?
  • I am making a new XSL to convert from XMLResume to Europass layout.
    This is being done purely for practical reasons.
  • I am considering the problems of integrating with Europass specs in general.
    This has a lot of implications as the two formats have fundamental differences. HR-XML is considered also along this path.
  • I am about to implement hResume in my existing and new XSL transforms.
  • I am considering the problem of integrating with LinkedIn.
There are a lot of thoughts on these issues but I will not make more comments on them until I feel I have something concrete to say about them.

Thursday, February 21, 2008

Can't change access modifiers when inheriting from Generic Types.

Well, I might be silly, but I had not run into this one yet. Until just now:
You cannot change the accessibility level of a class member by means of hiding when inheriting from a generic type.
Consider this example: a simple console application using two classes. MyList inherits from List<T> and My2ndList inherits from MyList. The generic type List<T> has a public method named Add that is not declared as virtual.
My intention was to completely hide the base implementation of the Add method in my derived class. In other words, let's assume that I want the MyList class to not expose an Add method. What one would normally do in this case would be to hide the method using the new modifier and changing its access modifier from public to private, like I tried to do with MyList's Add method in the code snippet that follows.
Well. Guess what. This does not work if you are inheriting from a generic type. Try the code below and play around with the access modifiers of the two Add methods. Although I would normally assume that the code below would not even compile, it does!
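
A stripped-down sketch of the situation (class names follow the description above; MyList is kept generic here for illustration):

    using System;
    using System.Collections.Generic;

    public class MyList<T> : List<T>
    {
        // Try to hide the inherited Add and make it private at the same time.
        private new void Add(T item)
        {
            base.Add(item);
        }
    }

    public class My2ndList<T> : MyList<T>
    {
        // Same trick one level further down.
        private new void Add(T item)
        {
            base.Add(item);
        }
    }

    class Program
    {
        static void Main()
        {
            MyList<int> list = new MyList<int>();
            // This compiles: from outside the class the private Add is simply
            // not visible, so the call binds to the public List<int>.Add.
            list.Add(42);
            Console.WriteLine(list.Count); // 1
        }
    }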

So why am I posting about this?

Well... I didn't come across any comments on the subject on the Internet for the little I looked around and I thought it was an interesting thing to talk about.

If you know of any links discussing the subject and explaining the internals of the compiler or generics implementation in C#, then by all means please do leave a comment to this post.

Saturday, December 2, 2006

Multiple calls to RegisterOnSubmitStatement and Client-Side Validation

Ok! Here is a new thing I discovered yet again the hard way...

In short: Do not call Page.ClientScript.RegisterOnSubmitStatement after the Page Load event.

(What?!)

Well yes! It's not under all circumstances that you can notice the difference, but it's there and it's major!
I do not really want to describe this in the abstract, so I'll take you through it with an example:
Let's say you have an aspx page. The page has two controls in it. For simplicity, let's make those controls UserControls. The controls are pretty simple: just a TextBox and a RequiredFieldValidator in each of them.

So there you have it:

Control A (let's call it OnSubmitControlA):
and the code file:

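A sketch of what the code-behind boils down to (class name, key and message are illustrative; the markup is assumed to hold just the TextBox and the RequiredFieldValidator):

    using System;
    using System.Web.UI;

    public partial class OnSubmitControlA : UserControl
    {
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            // Register a statement to run on the client just before the form submits.
            Page.ClientScript.RegisterOnSubmitStatement(
                GetType(),
                "OnSubmitControlA",
                "alert('OnSubmitControlA is submitting');");
        }
    }
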
Control B (let's call it OnSubmitControlB):

and the code file:

And finally the page itself:

(the codefile has nothing special in it...)

The page has of course a submit button so that we can submit and test it...

So! What we have here!?
  • A Page,
  • two controls that want to run some client-side code just before the page submits (for no particular reason)
  • and at least a Validator Control that will fail validation at some point. (If we did not have a validator then I would not have a case here!)
Now go render the page and see the result. If you leave either TextBox empty and click the submit button, you will notice that only the alert from the first control pops up. The other registered script is never called...

Now go back and make a slight change. In both controls' code files, move the call to Page.ClientScript.RegisterOnSubmitStatement from OnPreRender to OnLoad, like this:

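That is, the registration now happens in the Load phase (again an illustrative sketch):

    using System;
    using System.Web.UI;

    public partial class OnSubmitControlA : UserControl
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);

            // Same registration as before, moved from OnPreRender to OnLoad.
            Page.ClientScript.RegisterOnSubmitStatement(
                GetType(),
                "OnSubmitControlA",
                "alert('OnSubmitControlA is submitting');");
        }
    }
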
do the same on the other control:

Done! Go back and render the page! Leave either TextBox empty and click submit... See??? Now both alerts pop up!!!

Why is that???

Well, look at the source of the rendered page before and after the change to see what's going on:

Here is the script rendered when the call to RegisterOnSubmitStatement is placed in the OnPreRender event:

And here is the script rendered when the call to RegisterOnSubmitStatement is placed in the OnLoad event:

Got it?

If RegisterOnSubmitStatement is called after OnLoad, then the first time it's called the framework appends the statement that calls ValidatorOnSubmit() and returns false if validation fails (effectively blocking the rest of the script from executing). Subsequent calls to RegisterOnSubmitStatement (after OnLoad) are appended to the script generated by the first call (and get blocked by the effect I just described).

Instead, if all your calls to RegisterOnSubmitStatement come before the end of the OnLoad phase, then all registered scripts are appended to previously generated scripts before the eventual injection of the call to ValidatorOnSubmit().

Hoping for comments on this...