Tuesday, July 31, 2012

Photoshop and Illustrator are bosom friends of web developers and designers.

Photoshop and Illustrator are, in my opinion, Adobe’s two greatest tools, especially after I used them on a web development project where I needed a new logo and some image-based, rounded buttons with nice colors and fonts. In the past I had very little knowledge of these two great tools; being a web developer, I thought they were mainly for graphic designers. I used to search the Internet to download these elements (rounded buttons, image logos, rounded boxes, etc.), especially free versions, and use those. But these elements never quite matched the proposed web site’s colors, fonts, heights, and widths, and coding around them (CSS, HTML, ...) to get the right shapes is a lengthy process that sometimes delays projects. Later on I read some articles and books on web development that suggested learning these popular tools. I followed that path, and the result has been very positive.

The conclusion is that experience with HTML/XHTML, CSS, client-side scripting (JavaScript and helper tools like AJAX, jQuery/Prototype, ...), and one of the server-side stacks (ASP.NET/PHP/Ruby on Rails/Groovy/Perl/...) is not enough for a web developer. To be a real master, they need to be a web designer too, and for that, become a good friend of tools like Photoshop and Illustrator.

Tuesday, May 18, 2010

Creating printable version of a web page


In web-based applications, printing is a common requirement. Users can print a web page using the printing utilities of the browser, but this may leave them surprised because the format of the printed page is often messed up. For this reason a "Printer friendly" version of the page is provided. There are many ways to create a printer-friendly version using server-side scripting: a separate request for the printer-friendly page is sent to the server, and the server responds with a well-formatted printable page. While these approaches are completely workable, an elegant, client-side approach is to use only CSS to provide a printable version.

To accomplish this, we need to follow two simple steps.

1. Create a separate CSS file that controls the appearance of the page's elements in the printable version. Let's call it "printable.css".
2. Include the CSS into the page that will be printed with the following syntax.

<link rel="stylesheet" href="--CSS folder path--" type="text/css" media="print" />

The important part here is the "media" attribute. When the value of this attribute is set to "print", the browser automatically uses this CSS file to format the page while printing.

Let's see a quick example. The layout of our page could be the following:

<html>
<head>
<link rel="stylesheet" href="<CSS folder path>/printable.css" type="text/css" media="print" />
</head>
<body>
<div id="header">
<!-- HTML elements for header (logo, banner) go here -->
</div>

<div id="top_menu">
<!-- HTML elements for the top menu go here -->
</div>

<div id="main_content">
<!-- HTML elements for the page content go here -->
</div>

<div id="footer">
<!-- HTML elements for the footer go here -->
</div>
</body>
</html>

While printing this page, we would probably want to print only the main content and skip the header, menu, and footer. We can accomplish this with the following rule in "printable.css":

#header, #top_menu, #footer { display: none; }

We can add a link or button anywhere in the page to provide printing functionality.

<a href="#" onclick="window.print(); return false"> Print </a>

When the user clicks the "Print" link, the browser applies "printable.css" and hides everything except the content of the "main_content" element while printing. The result is a formatted, printer-friendly version of the page. We can fine-tune "printable.css" to take more control over the printable version.

Tuesday, March 9, 2010

Loading seed data with Seed-Fu plugin



Loading seed (default) data for lookup database tables is common to most applications. Rails provides different mechanisms to accomplish this task. I have found the implementation with a plugin called Seed-Fu very simple and easy to use.

Seed-Fu is based around loading Ruby scripts located in db/fixtures via a Rake task. We just place Ruby files containing the code to load seed data into the db/fixtures folder and execute the "db:seed" rake command. The plugin handles the rest.

Looking into the details of one of the seed data files will present a clearer view. We have a model named "Role" and want to populate seed data for it. We place a file named "roles.rb" into the "db/fixtures" folder. The file name can be anything, but we stick to the convention here. The file content is as follows:


# db/fixtures/roles.rb
# put as many seeds as you like in

Role.seed(:name) do |s|
  s.name = 'Admin'
  s.description = 'Administrator of the system'
end

Role.seed(:name) do |s|
  s.name = 'Developer'
  s.description = 'Developer of a project'
end

Role.seed(:name) do |s|
  s.name = 'Product Owner'
  s.description = 'Product owner of a product'
end


We want to add 3 roles (Admin, Developer, and Product Owner), so we put in 3 "Role.seed" blocks and populate the "name" and "description" attributes. When we run "db:seed", the roles database table is populated accordingly.

What will happen if we add some more seed data files later and run the "db:seed" rake command again? Will the role data be duplicated? Normally it would, but we have taken a guard :)

Look at the parameter ":name" after the "Role.seed" method call in the above file. The "Role.seed" method takes a comma-separated list of symbols that represent key attribute names of the corresponding model ("Role" here). Before inserting a new row, Seed-Fu checks whether a row with those key values already exists. If one does, no data is inserted.

In the above example we have provided ":name" as the key column/attribute name. Before inserting the first "Role.seed" block, Seed-Fu searches the "roles" table for a row with the value "Admin" in the "name" field. If such a row exists, it skips the block (no duplicate row is added) and proceeds with the same rule for the next blocks.
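The guard can be sketched in plain Ruby. Here the "roles" table is simulated with an in-memory array; the real plugin runs the same check against the database through ActiveRecord:

```ruby
# Simulate Seed-Fu's "insert unless a row with the key already exists" guard.
# The roles "table" is an in-memory array here; real Seed-Fu queries the DB.
ROLES = []

def seed_role(name, description)
  # Look for an existing row with the same key attribute (:name).
  existing = ROLES.find { |r| r[:name] == name }
  if existing
    existing # row already present: skip the insert
  else
    row = { :name => name, :description => description }
    ROLES << row
    row
  end
end

seed_role('Admin', 'Administrator of the system')
seed_role('Developer', 'Developer of a project')
seed_role('Admin', 'Administrator of the system') # duplicate key, skipped

puts ROLES.size # => 2
```

Running the seeding twice leaves the table unchanged, which is exactly why re-running "db:seed" is safe.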

Once you have set up your fixtures, it’s simple to run them:

rake db:seed

or

rake db:seed RAILS_ENV=<development|test|production>


Make sure to populate the database schema with "db:migrate" or "db:schema:load" before you run the "db:seed" command.

You can download the latest version of the Seed-Fu plugin from GitHub.

Friday, February 5, 2010

Rails cache expiration with sweepers



Sweepers enable expiring or deleting Rails cached items from a centralized place. This spares us the pain of writing cache expiration code scattered throughout the code base by providing an implementation of the observer pattern. Sweeper classes observe one or more models that contribute to the cached data and expire the related cached items when any change occurs in those models (created, updated, or deleted).

Let's understand the process with an example. We have a model named "SystemMessage" that provides messages posted by the system administrator. In the main layout we display the count of total system messages. Since several views are rendered inside this layout, we would need to hit the database every time to get the count. To avoid this, we cache the count during the first database hit with the following code, placed inside a method of the application_controller:

Rails.cache.fetch("system_message_count") { SystemMessage.count }


The code above hits the database to fetch the count at the very first access and caches it with the key "system_message_count". Subsequent calls to the method return the count from the cache until it expires.
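The fetch-or-compute pattern behind Rails.cache.fetch can be sketched in plain Ruby (a hash stands in for the cache store, and a lambda stands in for SystemMessage.count):

```ruby
# Minimal sketch of the fetch-or-compute pattern behind Rails.cache.fetch:
# return the cached value if the key is present, otherwise run the block,
# store its result, and return it.
CACHE = {}

def cache_fetch(key)
  return CACHE[key] if CACHE.key?(key)
  CACHE[key] = yield # cache miss: compute and store
end

$db_hits = 0
count_messages = lambda { $db_hits += 1; 42 } # stands in for SystemMessage.count

cache_fetch("system_message_count") { count_messages.call } # miss: hits the "database"
cache_fetch("system_message_count") { count_messages.call } # hit: served from the cache

puts $db_hits # => 1
```

Only the first call reaches the database; every later call is served from the cache until the key is deleted or expires.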

Now we need to expire and reload the count when a new "SystemMessage" is created or an existing one is deleted. Instead of putting the cache expiration code in different places inside controllers and models, we can put it inside a single sweeper class that reacts to the changes in the "SystemMessage" model (creation or deletion). Here is the code, placed inside the "app/models/cache/system_messages_sweeper.rb" file:

class SystemMessagesSweeper < ActionController::Caching::Sweeper

  observe SystemMessage

  def after_create(record)
    expire_system_message_cache(record)
  end

  def after_destroy(record)
    expire_system_message_cache(record)
  end

  private

  def expire_system_message_cache(record)
    Rails.cache.delete("system_message_count")
  end

end

In the code above, we observe the "SystemMessage" model and expire the "system_message_count" cache item when a record is created or deleted. So the count will be loaded again from the database at the next hit and cached again.

The sweeper class needs to be loaded by Rails, so we put the following code into the environment.rb file:

config.load_paths += %W( #{RAILS_ROOT}/app/models/cache )


The sweeper has to be added to the controller that will use it. The following code is added to the "SystemMessagesController" class.

cache_sweeper :system_messages_sweeper, :only => [:create, :destroy]


Rails cache sweepers have played an important role in improving the performance of ScrumPad, which is a popular agile/scrum project management and collaboration tool.

Wednesday, November 18, 2009

Load default lookup data with Rails fixture



Rails fixtures are a great feature for loading sample test data before running unit, functional, or integration tests. While they are mostly used for loading test data, they can also be a handy tool for loading default lookup data into the staging or production database. We can use the “create_fixtures” class method of the “Fixtures” class.

The following code snippet loads the data from the “countries.yml” file into the “countries” table.

require 'active_record/fixtures'

class CountryData
  def load
    fixtures_folder = File.join(RAILS_ROOT, 'test', 'fixtures')
    Fixtures.create_fixtures(fixtures_folder, 'countries')
  end
end


In this case, the “countries.yml” file must be present in the “<RAILS_ROOT>/test/fixtures” folder.
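For reference, a minimal “countries.yml” might look like the following (the column names here are hypothetical; they must match the actual “countries” table schema):

```yaml
# test/fixtures/countries.yml (illustrative sample)
bangladesh:
  id: 1
  name: Bangladesh
  code: BD
japan:
  id: 2
  name: Japan
  code: JP
```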

If we need to load data from all the “.yml” files of a particular folder and don’t want to hard-code the names of the individual files as in the above code snippet, the following code will do.

require 'active_record/fixtures'

class LoadDefaultData
  def load
    fixtures_folder = File.join(RAILS_ROOT, 'test', 'fixtures')
    fixtures = Dir[File.join(fixtures_folder, '*.yml')].map { |f| File.basename(f, '.yml') }
    Fixtures.create_fixtures(fixtures_folder, fixtures)
  end
end

Sunday, July 26, 2009

Mocks and Stubs in ROR with Mocha



Mock objects and stubs are very useful strategies in Test Driven Development (TDD). They play a great role in increasing the effectiveness and speed of unit tests.

Life is easy when we write unit tests for methods that return in-process, immediate results, like adding two integers and returning the sum. Things get complicated when the method interacts with applications in another process and makes a blocking call (connecting to a database, calling external web service methods, sending mail through a mail server).

For a variety of reasons, interacting with external resources can make our automated tests a lot harder to write, debug, and understand, and it takes a significant amount of time to run them. To extend unit test coverage and overcome this limitation, a best practice is to use mock objects, stubs, or other fake objects in place of the external resources that initiate inter-process communication.

The terms "Mock" and "Stub" are used interchangeably in most cases and thought of as the same, but there are subtle differences in the usage of the two. Martin Fowler explains the difference in his renowned article.

In my understanding so far, mocks and stubs are similar in nature, but mocks do more, in the form of "interaction-based testing".

A stub is a class whose method definitions imitate the original methods of a class that calls external resources. The imitated methods return hard-coded, known results. So the stub methods return expected outputs for a set of known inputs without calling external resources the way the original class methods do.

For example, if the original "execute_DML" method involves initiating a database connection, executing the SQL query against it, and returning true/false, the stub "execute_DML" method just returns true or false depending on the input.
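What a stub does can be shown in plain Ruby: replace a method that would hit an external resource with one that returns a hard-coded known result. (Mocha automates this and restores the original method after each test; the class below is made up for illustration.)

```ruby
# A class whose real method would open a database connection.
class DataAccess
  def execute_DML(sql)
    raise "would open a database connection here"
  end
end

DAO = DataAccess.new

# Stub the method on this one instance via its singleton class:
# same signature, hard-coded known result, no database involved.
def DAO.execute_DML(sql)
  true
end

puts DAO.execute_DML("DELETE FROM roles") # => true
```

The test now exercises the calling code without ever touching the database.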

Mocks do the same as stubs, but they consider the interaction between classes. We use mock objects to record and verify the interaction. In the above example, the mock "execute_DML" method not only returns true/false but also verifies that the method calls the appropriate methods to initiate the connection and send the SQL query to the database. In this case, the connection and query execution methods are also mocked to avoid external resource calls.

With mock methods, if we set an expectation that a method of a particular object is called only once and in reality it is called twice, the test is considered a failure even if the final outcome of the method is satisfied. This is different from stubs.

In the ScrumPad project, we use mocks and stubs in the above-mentioned scenarios. We use an excellent framework called Mocha for this.

Mocha is a library for mocking and stubbing that provides a unified, simple, and readable syntax. Let me provide some examples.


We can mock a "class" method. Suppose we have a method "add_to_cart" with the following original definition:

def add_to_cart(product_id)
  product = Product.find(product_id)
  Cart.add(product) unless product.nil?
  true
end


The call to "Product.find" involves a database interaction. We can use mock methods to avoid it.


require 'test/unit'
require 'mocha'

class MiscExampleTest < Test::Unit::TestCase

  def test_mocking_a_class_method
    product = Product.new
    Product.expects(:find).with(1).returns(product)
    assert_equal true, add_to_cart(1)
  end
end


In the above case, we have set an expectation on the "Product" class for the "add_to_cart" method: inside "add_to_cart" there must be a call to the "find" method of the "Product" class. If the call is omitted, the test will fail even if the final assertion passes (e.g. the return value of "add_to_cart" is true). Since we have mocked the "find" class method of the "Product" ActiveRecord class, there is no actual database interaction.

We can also set how many times a particular method should be invoked. For example, we can set the expectation in the above example like this:

Product.expects(:find).once.with(1).returns(product)

This will raise an error if we call the "find" method more than once inside the "add_to_cart" method.

We can set expectations on "instance" methods instead of "class" methods. For example:

Product.any_instance.expects(:save).returns(true)


Using stubs instead of mocks is very similar in Mocha. For example:


def test_stubbing_an_instance_method_on_all_instances_of_a_class
  recipient_token = "124399A_@@44"
  PaymentService.any_instance.stubs(:create_recipient_token).
    returns(recipient_token)

  transaction_response = TransactionResponse.new
  transaction_response.stubs(:status).returns('Success')

  PaymentEngine.handle_payments()
end


In the above example, we are using stubs to return predefined results from the imitated methods.

Inside the "handle_payments" method, calls to the "create_recipient_token" method of the "PaymentService" web service will not invoke the web service; they will just return the hard-coded string.

Similarly, calls to the "status" method of the "TransactionResponse" class will always return "Success".

In both cases we are not setting any interaction-based expectations as we did with mocks (the "expects" keyword in Mocha). We are just freezing the external resource invocation by returning known results with stubs.

The simplicity of Mocha and the usage of these two extremely powerful tools are helping us a lot in increasing the test coverage of ScrumPad.

Thursday, May 14, 2009

How to embed images into XML files



The title may sound confusing. We all know XML as a plain text format, while images are binary files. So how can we embed images inside an XML file? Is it some kind of link to external image files, like in web pages? Not exactly. The image is completely embedded inside the nodes of the XML in text format, as a base64 string.

The Base64 Format:


Base64 converts binary data to plain text using 64 case-sensitive, printable ASCII characters: A-Z, a-z, 0-9, the plus sign (+), and the forward slash (/), and may be terminated with 0-2 "padding" characters represented by the equal sign (=). For example, the eight-byte binary data in hex "35 71 4d 8e 4c 5f db 42" converts to Base64 text as "NXFNjkxf20I=".
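The eight-byte example above is easy to check with Ruby's standard Base64 module:

```ruby
require 'base64'

# Check the eight-byte example from the text: hex 35 71 4d 8e 4c 5f db 42
# should encode to the Base64 string "NXFNjkxf20I=".
binary  = ["35714d8e4c5fdb42"].pack("H*")   # hex string -> raw bytes
encoded = Base64.strict_encode64(binary)
puts encoded # => "NXFNjkxf20I="

# Decoding restores the original bytes exactly.
decoded = Base64.strict_decode64(encoded)
puts decoded.unpack("H*").first # => "35714d8e4c5fdb42"
```

The round trip is lossless, which is what makes Base64 safe for carrying binary image data inside a text-only XML node.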

To accomplish this, we need some kind of encoder application that converts the image into base64 format and embeds the text into the XML. To extract the image from the XML, we need another application that reads the base64 string and decodes it into the image again. In the .NET Framework class library (FCL), we have the convenient "System.Convert" class to perform this encoding and decoding. Let's look at an example to understand the process.

The Base64 Encoder:

static void Main()
{
    string base64FormattedImage = string.Empty;
    string imageFilePath = @"C:\Images\nature.jpg";
    base64FormattedImage = EncodeToBase64FromImage(imageFilePath);

    if (!string.IsNullOrEmpty(base64FormattedImage))
    {
        string destinationXMLFile = @"C:\XML\NewsContent.xml";
        WriteToXML(base64FormattedImage, destinationXMLFile);
    }
}

private static string EncodeToBase64FromImage(string imageFilePath)
{
    byte[] binaryData = null;
    string base64String = string.Empty;

    try
    {
        System.IO.FileStream inFile = new System.IO.FileStream(imageFilePath,
            System.IO.FileMode.Open,
            System.IO.FileAccess.Read);
        binaryData = new byte[inFile.Length];
        inFile.Read(binaryData, 0, (int)inFile.Length);
        inFile.Close();
    }
    catch (System.Exception exp)
    {
        // Error creating the stream or reading from it.
        System.Console.WriteLine("{0}", exp.Message);
    }

    // Convert the binary input into Base64 encoded output.
    try
    {
        base64String = System.Convert.ToBase64String(binaryData, 0, binaryData.Length);
    }
    catch (System.ArgumentNullException)
    {
        System.Console.WriteLine("Binary data array is null.");
    }

    return base64String;
}

private static void WriteToXML(string base64FormattedImage, string destinationXMLFile)
{
    XmlDocument doc = new XmlDocument();
    doc.Load(destinationXMLFile);
    XmlNode nd = doc.SelectSingleNode("/contents/content/Images").FirstChild;
    nd.InnerText = base64FormattedImage;
    doc.Save(destinationXMLFile);
}


The encoder reads an image file from the disk, converts it into a byte array, and calls the "System.Convert.ToBase64String" method to get the base64 string from the byte array. The Main method then passes the base64 string to the WriteToXML method to save it into an XML file.

The XML file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<contents>
<content>
<ID>1</ID>
<Title>The title</Title>
<Body>
Content goes here..
</Body>
<Images>
<Image ID="1">/9j/4AAQSkZJRgABAgEAYABgAAD/4RBY ......</Image>
</Images>
</content>
</contents>


The Base64 Decoder:

static void Main()
{
    string sourceXMLFilePath = @"C:\XML\NewsContent.xml";
    string destinationImageFilePath = @"C:\Images\nature.jpg";
    DecodeFromBase64ToImage(sourceXMLFilePath, destinationImageFilePath);
}

private static void DecodeFromBase64ToImage(string sourceXMLFilePath, string destinationImageFilePath)
{
    XmlDocument doc = new XmlDocument();
    doc.Load(sourceXMLFilePath);
    XmlNode nd = doc.SelectSingleNode("/contents/content/Images").FirstChild;
    string base64String = nd.InnerText;

    byte[] binaryData;
    try
    {
        binaryData = System.Convert.FromBase64String(base64String);
    }
    catch (System.ArgumentNullException)
    {
        System.Console.WriteLine("Base64 string is null.");
        return;
    }
    catch (System.FormatException)
    {
        System.Console.WriteLine("Base64 string length is not 4 or is not an even multiple of 4.");
        return;
    }

    // Write out the decoded data.
    try
    {
        System.IO.FileStream outFile = new System.IO.FileStream(destinationImageFilePath,
            System.IO.FileMode.Create,
            System.IO.FileAccess.Write);
        outFile.Write(binaryData, 0, binaryData.Length);
        outFile.Close();
    }
    catch (System.Exception exp)
    {
        // Error creating the stream or writing to it.
        System.Console.WriteLine("{0}", exp.Message);
    }
}


The decoder parses the XML document, retrieves the base64 string, and converts it into a byte array by calling the "System.Convert.FromBase64String" method. It then saves the byte array as an image file on the disk.

This approach is widely used in the online news media industry. News media companies exchange articles and images among themselves in XML format. The schema of the XML usually follows the popular NITF (News Industry Text Format) schema or DTD. You can find more about NITF here.

Monday, May 4, 2009

Creating a Private Ruby Gem Server



When installing a new Ruby gem on a machine, we normally use the gem install <gem name> command, which downloads the latest version of that gem. The problem is that gems are not always backward compatible. So if a Rails application was developed using a particular version of a gem (e.g. 0.1.1) and the latest version on the Internet (normally rubyforge.org) is 0.1.2, problems may arise if version 0.1.2 is not backward compatible with 0.1.1. This can cause serious trouble when the application is deployed to the web server and the wrong version of the gem is installed; most probably the application will not function as expected.

A solution to this problem is to deploy the exact versions of all the gems that were used during development. One way to implement this is to host the correct versions of the gems on a web server (thus we call it a private gem server). I have tested this option on an Apache web server. The steps are as follows:

1. Create a directory for hosting gems in the public files area of the web server. Let's refer to it as <Base Directory>.

ssh user@web.server
cd /var/www
mkdir my_gem_server


2. Create a subdirectory called "gems" under the <Base Directory>. The name of the subdirectory must be "gems" by convention.

ssh user@web.server
cd /var/www/my_gem_server
mkdir gems


3. Copy all the necessary gems from the development machine into the /var/www/my_gem_server/gems subdirectory.

4. Generate the gem index.

gem comes with a command, generate_index, which generates all of the files necessary for serving gems over HTTP. Run this command against the <Base Directory> (/var/www/my_gem_server).

gem generate_index -d /var/www/my_gem_server

Now the private gem server is ready to serve the gems for download and install.

To install the private gems on the target web server where the application will be deployed, log in and run the following command.

gem install <gem name> --source http://<private gem server>/<Base Directory>

The correct versions of the gems will be installed, and thus the application integrity will be maintained. We need to rerun the gem generate_index command each time we add or remove a gem from the private gem server.
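On the application side, this pairs naturally with Rails 2.x gem pinning in config/environment.rb, so the app declares exactly the versions the private server hosts (the gem name and server URL below are placeholders):

```ruby
# config/environment.rb (Rails 2.x) -- gem name and source URL are hypothetical
Rails::Initializer.run do |config|
  config.gem 'some_gem', :version => '0.1.1',
             :source => 'http://private.gem.server/my_gem_server'
end
```

With the version and source pinned here, rake gems:install fetches exactly what development used.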

Wednesday, April 15, 2009

Replicate the SQL IN keyword in LINQ to SQL



LINQ to SQL has made the life of .NET developers easier by replicating most of the familiar SQL keywords (SELECT, FROM, WHERE, JOIN, ...) within the scope of the language. It is easy for developers who are used to raw SQL syntax to construct a LINQ to SQL expression. One of the important SQL keywords is "IN", which is used frequently in the "WHERE" clause of SQL.

For example:

SELECT CustomerID, Name FROM Customers WHERE City IN ('Dhaka', 'Rajshahi', 'Chittagong', 'Khulna')

While constructing this query in LINQ to SQL, the usual solution may come to the mind like this:

C#

string listOfCity = "Dhaka,Rajshahi,Chittagong,Khulna";

MyDataContext context = new MyDataContext ();

var result = from cust in context.Customers
where cust.City in listOfCity
select cust.CustomerID,cust.Name;

return result.ToList();


Actually, the above expression does not work; with a little twist we can get it up and running. The correct code is:

List<string> listOfCity = new List<string>();

listOfCity.Add("Dhaka");
listOfCity.Add("Rajshahi");
listOfCity.Add("Chittagong");
listOfCity.Add("Khulna");

/* We can also use array of string as the following

string[] listOfCity = { "Dhaka","Rajshahi","Chittagong","Khulna" }; */

MyDataContext context = new MyDataContext ();

var result = from cust in context.Customers
             where listOfCity.Contains(cust.City)
             select new { cust.CustomerID, cust.Name };

return result.ToList();


The trick is to define a generic list of strings or an array of strings and then use the <List>.Contains(<Database column name>) syntax to replicate the SQL "IN" syntax in LINQ to SQL.
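For comparison, the same membership-test idea works in any language; here it is in Ruby over in-memory data (the records are made up for illustration):

```ruby
# Filtering with a membership test, analogous to SQL IN / LINQ Contains.
list_of_city = ['Dhaka', 'Rajshahi', 'Chittagong', 'Khulna']

customers = [
  { :id => 1, :name => 'Rahim',  :city => 'Dhaka'  },
  { :id => 2, :name => 'Karim',  :city => 'Sylhet' },
  { :id => 3, :name => 'Selina', :city => 'Khulna' }
]

result = customers.select { |c| list_of_city.include?(c[:city]) }
puts result.map { |c| c[:name] }.inspect # => ["Rahim", "Selina"]
```

The collection's membership method (include?, Contains) flips the comparison around: instead of asking "is the column in the list", we ask "does the list contain the column value".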

Monday, April 6, 2009

Decision making in SSIS depending upon condition



SSIS (SQL Server Integration Services) is not very flexible when it comes to decision making based on a condition. In many cases you feel the need for a “Condition” block that can be used in the control flow to choose between alternative paths. For example, if you want to copy a file named “AnyFile.txt” into a folder named “Current” when the file does not exist in a folder named “Previous”, and otherwise move the file to a folder named “Archive”, there is no direct way to test the “If File Exists in Previous Folder” condition. One possible solution is to use the “Script Task”. The steps are as follows:

1. Declare a package level Boolean variable named “IsFileExistsInPrevious” for example.

2. Create a script task and pass the “IsFileExistsInPrevious” variable into it as a “ReadWriteVariable”. Pass other “ReadOnly” variables that contain the name of the file if necessary.

3. Using the components in the .NET “System.IO” namespace, check the existence of the file in the “Previous” folder. If the file exists, set the value of the “IsFileExistsInPrevious” variables to “True”. Otherwise set it to “False”.

VB .NET Script:

' The source file path comes from a ReadOnly package variable (see step 2).
Dim srcFilePath As String = ""

If File.Exists(srcFilePath) Then
    Dts.Variables("IsFileExistsInPrevious").Value = True
Else
    Dts.Variables("IsFileExistsInPrevious").Value = False
End If

Dts.TaskResult = Dts.Results.Success


4. Declare two “File System” tasks that copy the file to the “Current” or “Archive” folder depending on the result of the previous “Script Task”. Drag a “Precedence Constraint” line from the “Script Task” to one of the “File System” tasks.

5. Right-click the “Precedence Constraint” line and select “Edit”. In the “Precedence Constraint Editor” window, set the value of “Evaluation Operation” to “Expression”. Type the following into the “Expression” box.

@[User::IsFileExistsInPrevious]==false

6. Drag this “Precedence Constraint” line to the “File System” task that copies the file to the “Current” folder.

7. Right-click the second “Precedence Constraint” line and select “Edit”. In the “Precedence Constraint Editor” window, set the value of “Evaluation Operation” to “Expression”. Type the following into the “Expression” box.

@[User::IsFileExistsInPrevious]==true

8. Drag this “Precedence Constraint” line to the “File System” task that copies the file to the “Archive” folder.

The key point to note here is the usage of the “Precedence Constraint” editor. The “Expression” option enables selecting alternative paths depending on a condition (the value of the user variable “IsFileExistsInPrevious” in this case).

Tuesday, March 31, 2009

Setting the FTP password from external configuration sources in a SSIS package



SSIS (SQL Server Integration Services) is a great ETL (Extract Transform Load) tool from Microsoft with many useful features for integrating data from different sources. One of the handy features is the “FTP Task”, which enables downloading files from a remote FTP address. The “FTP Task” uses an “FTP Connection Manager” component that stores the FTP credentials (URL, user, password, ...). Instead of hard-coding the credentials into the component, we can use the “Expressions” feature of SSIS to set those properties from configuration files, registry entries, or environment variables. So we can set the FTP URL, port, and user name from the configuration sources. Unfortunately and surprisingly, there is no option in the “Expressions” list to set the FTP password; the “Server Password” option just does not appear in the “Expression” list. One workaround is to use the “Script Task” component to set the “ServerPassword” property of the “FTP Connection Manager” component. The VB .NET code snippet below (SQL Server 2005 only supports VB .NET) does the following:

1. Accesses the “FTP Connection Manager” component.
2. Sets the “ServerPassword” property with the value of a package variable (“FTPPassword” here). The package variable can be set from any external configuration source supported by SSIS (XML file, environment variable, registry, ...).


Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()

        Dim FTPConnectionManager As ConnectionManager

        FTPConnectionManager = Dts.Connections("FTP_Conn")

        FTPConnectionManager.Properties("ServerPassword").SetValue(FTPConnectionManager, _
            Dts.Variables("FTPPassword").Value)

        Dts.TaskResult = Dts.Results.Success

    End Sub

End Class


Note: The “Script Task” component must execute before the “FTP Task” component.

Monday, March 16, 2009

Adding a serial number field to the ASP .NET GridView



I was developing a page that displays some data from the database in an ASP.NET GridView control through LINQ. The grid was required to display a serial number field as the first column of each row. Since I was binding the result of a LINQ call (List<SomeObject>) to the grid, it seemed I had to add a new field to the "Entity/Model" class itself and populate the data somehow before binding it to the grid. I was not willing to add a "Serial #" field to the class only to display a serial number and was looking for a solution at the view layer. Then I found this handy solution built into the "GridView" control itself.

<asp:gridview id="myGridView" runat="server" autogeneratecolumns="False">
  <columns>
    <asp:templatefield headertext="#" itemstyle-width="15px">
      <itemtemplate>
        <%# Container.DataItemIndex + 1 %>
      </itemtemplate>
    </asp:templatefield>
  </columns>
</asp:gridview>

The "Container.DataItemIndex" property does the trick and provides an index for each row item in the GridView. Since the index starts from 0, I only had to increase it by 1 to display a serial number on every row without altering the data access layer. Quite useful, isn't it?

Wednesday, February 18, 2009

Passing security credentials to C# 2.0 Web Service using SOAP Header



In a project, we had a web service that required the client to pass security credentials (user ID and password) when calling some of the web methods. At first we thought of adding two additional parameters (user ID and password) to every web method that required authentication, but this strategy looked very repetitive and made the method parameter lists longer. After exploring for a while, we discovered a nice solution: passing the credentials through the SOAP header. To implement this, we need to perform the following steps.

Web Service:

1. Inherit a class from the “System.Web.Services.Protocols.SoapHeader” class that contains the security credential related fields. Put this class into the same file that contains the web service class.

public class SecurityInfo : SoapHeader
{
    public string UserID;
    public string Password;
}

2. Add a public field into the web service class of the type “SecurityInfo”.

public SecurityInfo SecurityCredentials = new SecurityInfo();

3. Add a “SoapHeader” attribute to the web method that requires access to security credentials passed by the client application. The parameter name of the “SoapHeader” attribute must be same as the public field declared in the above step (“SecurityCredentials” in this case).


[WebMethod(Description = "A method that requires authentication")]
[SoapHeader("SecurityCredentials")]
public void PerformASensitiveTask()
{
    // Call the authentication method here with the security credentials.
    Authenticate(SecurityCredentials);
}

[WebMethod(Description = "Another method that requires authentication")]
[SoapHeader("SecurityCredentials")]
public void PerformAnotherSensitiveTask()
{
    // Call the authentication method here with the security credentials.
    Authenticate(SecurityCredentials);
}


private void Authenticate(SecurityInfo credentials)
{
    // Validation logic goes here. An exception is thrown if not validated.
    string userId = credentials.UserID;
    string password = credentials.Password;
    // ...
}


So the final web service code looks like this:


public class TestService : System.Web.Services.WebService
{
    public SecurityInfo SecurityCredentials = new SecurityInfo();

    public TestService()
    {
    }

    [WebMethod(Description = "A method that requires authentication")]
    [SoapHeader("SecurityCredentials")]
    public void PerformASensitiveTask()
    {
        // Call the authentication method here with the security credentials.
        Authenticate(SecurityCredentials);
    }

    [WebMethod(Description = "Another method that requires authentication")]
    [SoapHeader("SecurityCredentials")]
    public void PerformAnotherSensitiveTask()
    {
        // Call the authentication method here with the security credentials.
        Authenticate(SecurityCredentials);
    }

    private void Authenticate(SecurityInfo credentials)
    {
        // Validation logic goes here. An exception is thrown if not validated.
        string userId = credentials.UserID;
        string password = credentials.Password;
        // ...
    }
}

public class SecurityInfo : SoapHeader
{
    public string UserID;
    public string Password;
}


Web Service Consumer (Console App):

After adding a web reference to the web service, Visual Studio automatically adds a “SecurityInfoValue” property to the consumer-side proxy. The security-credential class “SecurityInfo” is also exposed to the client application. We just need to instantiate an object of this class, set the credentials, and assign it to the “SecurityInfoValue” property of the client-side proxy class of the web service. We can then call multiple web methods with the same instance in the current scope, and all the web methods will have access to the security credentials. The client code looks like this:



static void Main(string[] args)
{
    TestWebService.TestService srvc = new TestWebService.TestService();

    TestWebService.SecurityInfo sec = new TestWebService.SecurityInfo();
    sec.UserID = "ID";
    sec.Password = "Password";
    srvc.SecurityInfoValue = sec;

    srvc.PerformASensitiveTask();
    srvc.PerformAnotherSensitiveTask();
}


I think using a SOAP header to pass credentials to multiple web methods of a web service is a really elegant solution.
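For the curious, this is roughly what goes over the wire: the credentials travel inside the SOAP envelope's header element rather than among the method parameters. The envelope below is a sketch that assumes the default ASMX namespace (http://tempuri.org/) and default serialization; the element names come from the “SecurityInfo” class and its public fields.

```xml
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- Serialized from the SecurityInfo SoapHeader class -->
    <SecurityInfo xmlns="http://tempuri.org/">
      <UserID>ID</UserID>
      <Password>Password</Password>
    </SecurityInfo>
  </soap:Header>
  <soap:Body>
    <PerformASensitiveTask xmlns="http://tempuri.org/" />
  </soap:Body>
</soap:Envelope>
```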

Friday, February 6, 2009

When to apply OOAD concepts and Design Patterns?



I strongly believe that concepts like OOAD (Object Oriented Analysis and Design) and design patterns contribute in a very positive way to making an application well designed and flexible. But over the years I have seen myself and some of my colleagues struggle to decide where to start in the software design phase when a new project arrives.

Should we list all the memorized design patterns first and search the requirement list for opportunities to apply them?

Should we think of the basic OO concepts like encapsulation, polymorphism or inheritance (“is-a” or “has-a”) and start designing the classes according to those in the first place?

Often many developers lose their way through an inability to prioritize these processes and think too much about theoretical analysis and design without concentrating on solving the problem at hand first. I have also faced this problem, which is popularly termed “Analysis Paralysis” in software analysis and design books.

I have found an excellent guideline for prioritizing these processes in the book “Head First Object Oriented Analysis and Design”, published by O’Reilly. The guideline suggests that:

1. We should concentrate on solving the problem at hand first with the best possible algorithm and make sure that the software does exactly what the customer expects from it. In a nutshell, if the customer does not recognize the value of the software, all the hard work on analysis, design, patterns, best practices, and the latest and greatest technologies is in vain, my friend.

2. After making the software workable, we should apply basic OO principles to make it flexible by eliminating duplicate code and improving bad design.

3. Finally, we should strive to apply proven design patterns so that the software is easy to extend and reuse.


I just wanted to share these principles and priorities with you, and I believe that remembering them will help you choose the right path while designing and developing software.

Thursday, January 8, 2009

Tracking the completion of a file upload or copy using .NET FileSystemWatcher component



I had an assignment to process files uploaded by users to an FTP location. Files used to be removed after the processing was complete. I had developed a .NET Windows Service on the FTP server machine to watch the FTP directory for uploaded files. The service tracked files as soon as they were uploaded and started processing them, using the .NET "FileSystemWatcher" component to monitor the directory. The problem was that as soon as a file was created in the directory (i.e., the upload had just started), the service would start processing it. Normally it takes a while for an upload to complete, especially for larger files, so the service was processing incomplete files before the upload finished. After exploring the properties, methods and events of the extremely useful "FileSystemWatcher" component, I found a solution to track when a file is completely uploaded.

To mark the upload completion, a temporary file with a ".flag" extension is created whose name exactly matches the name of the uploaded file. For example, if the file name is "Data.zip", the name of the temporary file would be "Data.zip.flag". Any valid and unused extension other than ".flag" could have been used here. The service code is then changed to look for files with the ".flag" extension to know that a file has been fully uploaded or copied. It extracts the original file name from the ".flag" file name and starts the normal processing.

Here is the code:

static void Main(string[] args)
{
    FileSystemWatcher watcher = new FileSystemWatcher();
    watcher.Path = "C:\\Temp";
    watcher.Filter = "*.zip";
    watcher.NotifyFilter = NotifyFilters.LastWrite;
    watcher.Changed += new FileSystemEventHandler(watcher_Changed);
    watcher.EnableRaisingEvents = true;
    Console.ReadLine();
}

static void watcher_Changed(object sender, FileSystemEventArgs e)
{
    FileInfo fi = new FileInfo(e.FullPath);

    if (fi.Length > 0 && !File.Exists(e.FullPath + ".flag"))
    {
        // Dispose the FileStream returned by File.Create so the
        // flag file is not left open and locked.
        File.Create(e.FullPath + ".flag").Dispose();
    }
}

The "Path" property of the "FileSystemWatcher" component specifies the directory to watch, and the "Filter" property restricts the watch to particular file names or types. But the crucial property for this particular task is "NotifyFilter", which is set to "NotifyFilters.LastWrite" here. With this setting the component raises a notification each time the file is written to, so the last notification arrives once the final byte of the file has been written to disk.

The "Changed" event of the component is handled to get notified of these writes. Here we check that the file size is greater than zero and that the temporary file does not already exist; if both conditions hold, we create the temporary file. The same event is fired multiple times by the .NET Framework, so this check ensures the temporary file is created only once.
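The processing side described above, which watches for ".flag" files and recovers the original file name, is not shown. A minimal sketch of it might look like the following; "FlagWatcher" and "ProcessFile" are hypothetical names standing in for whatever the real Windows Service does:

```csharp
using System;
using System.IO;

class FlagWatcher
{
    static void Main(string[] args)
    {
        FileSystemWatcher watcher = new FileSystemWatcher();
        watcher.Path = "C:\\Temp";
        watcher.Filter = "*.flag";          // watch only the completion markers
        watcher.NotifyFilter = NotifyFilters.FileName;
        watcher.Created += new FileSystemEventHandler(watcher_Created);
        watcher.EnableRaisingEvents = true;
        Console.ReadLine();
    }

    static void watcher_Created(object sender, FileSystemEventArgs e)
    {
        string dataFile = StripFlagExtension(e.FullPath);
        if (File.Exists(dataFile))
        {
            ProcessFile(dataFile);          // hypothetical processing routine
            File.Delete(e.FullPath);        // clean up the marker when done
        }
    }

    // "Data.zip.flag" -> "Data.zip"
    internal static string StripFlagExtension(string flagPath)
    {
        return flagPath.Substring(0, flagPath.Length - ".flag".Length);
    }

    static void ProcessFile(string path)
    {
        Console.WriteLine("Processing " + path);
    }
}
```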