CASoft Blog

         Communication Aspects in Software Engineering

8 June 2009

Java and .Net Interoperability

Filed under: Technology — Tags: — admin @ 15:48

JNBridge delivers a message-oriented bridge between Java and .NET objects, using a JMS client on the Java side and .NET Remoting on the CLR side.
JNBridgePro provides a proxy system that handles garbage collection gracefully and exposes Java object implementations to .NET Remoting.

JNBridge – http://www.jnbridge.com/


5 June 2009

Code Quality – Preventive vs Corrective actions

Filed under: Project Management — Tags: , — admin @ 15:27

When it comes to quality in general and code quality in particular, I very much believe in prevention over correction.
When asked about code quality, people tend to offer corrective-type solutions. Corrective solutions are good and necessary, but they are more costly, and they are worth minimising by implementing a number of preventive solutions upstream. Examples of preventive actions:

  • Ensure quality of requirements – implement continuous requirements development and recording
  • Ensure quality of architecture and design – implement methodologies for modelling and documentation, such as UML and RUP
  • Ensure quality of staff – implement relevant and continuous training programs and recognition programs – manage pressure intelligently
  • Ensure quality of project processes – implement methodologies for project management and software engineering, such as PMBoK, CMMI, RUP and/or a recognised Agile approach.

I have worked for companies which spend a lot of time on the preventive side of things, and others which spend none whatsoever. Experience shows that when time is spent implementing preventive solutions, the overall time to develop the software is neither significantly shorter nor longer…
The time normally spent fixing bugs at the end is instead spent up-front ensuring quality. The dividends are then paid in the maintenance phase, where things get much easier. One case study showed that 50% fewer staff were required in the first 5 years of maintaining a software solution re-engineered using a proper design and UML approach.
From my point of view, the frequency with which total re-engineering is needed is also significantly decreased.

In conclusion, in my experience people implementing Agile approaches often tend to dismiss or minimise preventive activities in the name of agility, even though this is not an Agile requirement and it is undesirable for high-quality outcomes.


11 May 2009

RUP – Software Component Architecture

Filed under: RUP,UML — Tags: , , , — admin @ 09:11

What is Architecture?

  • In computer science, Architecture is the nature and structure of a system that determines the way it operates.

What Architecture is not:

  • Architecture is not a Framework: While an architecture can take into account the use of a framework, the definition of a framework is not sufficient! A framework is just one component.

The Technical Architecture (or Model of Architecture) is the nature of the system. For instance, it could be:

  • Monolithic
  • Client Server
  • Distributed
  • N-tier

Paradigms can be integrated, such as:

  • Model-View-Controller
  • Software Components
  • Design patterns

A framework can be defined as part of the technical architecture.

The business architecture defines the structure of the system.
It should outline the different parts of the system, their roles and their relationships.

What Does Component Architecture Mean for RUP?

  • Components are cohesive groups of code, in source or executable form, with well-defined interfaces and behaviours that provide strong encapsulation of their contents, and are therefore replaceable.
  • Architectures based around components tend to reduce the effective size and complexity of the solution, and so are more robust and resilient.
  • In the examples below, there is the same number of objects, but a different level of complexity:


Definition for Software Component

  • A Software Component is an independent portion of code that is accessed through a defined interface.
  • Software Components may be purely logical, and can always be defined whatever the technology used!
  • Software Components may also be physical entities, such as a library (e.g. a DLL) or a distributed component (e.g. EJB, CORBA, DCOM, Web Services, etc.)… but not necessarily, and using EJBs does not in itself make a well-thought-out Software Component Architecture.
  • Physical Software Components may be reused, purchased and/or replaced.

Business components are those that implement the functionality specific to a business.
A “Computation Engine”, which provides specific computation services, is an example of a business component.
Business components are more difficult to reuse than technical components, due to their specific nature.

Technical components are those that implement generic functionality.
An example of a technical component is “Document Printing”.
Technical components can be designed in order to be reused. They can be part of a technical framework.
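
As a minimal illustration of both kinds of component (all names below are hypothetical and not taken from any particular project), each one reduces to a well-defined interface behind which the implementation stays replaceable:

    // Hypothetical business component interface: domain-specific computation services.
    interface ComputationEngine {
        double computePremium(double baseAmount, int riskClass);
    }

    // Hypothetical technical component interface: generic document printing,
    // reusable across business domains and a candidate for a technical framework.
    interface DocumentPrinting {
        void print(String documentId, byte[] content);
    }

    // The implementation can be replaced (or purchased) without touching client code,
    // which depends only on the interface.
    class DefaultComputationEngine implements ComputationEngine {
        @Override
        public double computePremium(double baseAmount, int riskClass) {
            return baseAmount * (1.0 + 0.1 * riskClass); // placeholder rule, illustration only
        }
    }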

Why use Software Components?

  • They usually reduce the complexity of a software system, by identifying well-defined interfaces and independent portions of code.
  • While it is rather inefficient to hand Use Cases directly to programmers for development, it is much more effective to give them components to implement. Would you envisage outsourcing the development of a use case? It is easy to outsource the development of a component, or to buy an existing one.
  • This approach eases and improves workload estimation, planning and task assignment.
  • Reusing components is efficient because they are already developed and tested.

Components can be developed in order to be reusable, especially the components that provide common solutions to a wide range of common problems.
These reusable components, which may be larger than just collections of utilities or class libraries, form the basis of reuse within an organization, increasing overall software productivity and quality.
Before promoting aggressive reuse, however, ensure that solid experience and knowledge have been acquired in the domain of software components.

Because they are well defined, components can be refactored with less pain than in a less well structured / organised architecture.
The interface of the component may remain mostly unchanged, while the implementation is entirely reworked.
Even if the interface does change, finding the impacted code is easier than finding everything that uses the multiple classes and methods that compose the component.

In the early days of software development, documentation and system integration were usually poorly undertaken, the Architect was often asked to contribute to the development effort, and the Customer would end up performing most of the testing.

RUP recommends implementing “Use Case Packages” as components for requirements, in order to group requirements by type of functionality.
This approach eases the subsequent analysis and design of the application with software components.

Identifying components may be performed this way:

  • Identify different modules, packages, subsystems and layers, e.g. Billing and Subscription modules.
  • Try to find common features, e.g. printing out.
  • Split modules, define interfaces and relationships with other modules.
  • Iterate and go deeper to find as many components as possible. Use a top-down approach, from the graphical interface towards the data, and a bottom-up approach at the same time.
  • Then apply patterns such as Model-View-Controller.

The Components specification will provide a brief description of the components and their relationships.
The result is usually described with Composite Structure or Component Diagrams and accompanying text, providing a high-level description for each component.

The Components Analysis will provide a brief definition of each component, and a detailed definition of its interfaces and relationships.
At this stage, Components are black boxes.
The result is usually obtained and described with Sequence Diagrams and/or Collaboration Diagrams. Class Diagrams can be used to describe the interfaces.

The Components Design will provide the detailed description of what is inside each component / inside the box.
The result is usually obtained and described with Sequence Diagrams, Collaboration Diagrams, Class Diagrams, State Diagrams, etc.

The implementation of the components can now easily be performed by a Developer or a Team.
The team will be responsible for maintaining the design of the component and for unit testing the component.
The team might be the supplier and/or the customer for another component/team.

Components can be individually tested and gradually integrated to form the whole system. When performing unit tests of a component, mock components (giving fake answers) can be used to form the test-bench of the component under test.
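
As a minimal sketch of such a test-bench (the component and interface names are hypothetical; in practice a mocking framework such as Mockito could play the same role), run with -ea to enable the assertions:

    // The component under test depends on another component only through its interface.
    interface SubscriptionService {
        boolean isActive(String customerId);
    }

    class BillingService {
        private final SubscriptionService subscriptions;

        BillingService(SubscriptionService subscriptions) {
            this.subscriptions = subscriptions;
        }

        double invoiceAmount(String customerId, double usageCharge) {
            // Only active subscribers are billed: flat fee plus usage.
            return subscriptions.isActive(customerId) ? 20.0 + usageCharge : 0.0;
        }
    }

    class BillingServiceTestBench {
        public static void main(String[] args) {
            // Mock component giving fake answers, standing in for the real subscription system.
            SubscriptionService mock = customerId -> "C42".equals(customerId);

            BillingService billing = new BillingService(mock);
            assert billing.invoiceAmount("C42", 100.0) == 120.0; // active customer is billed
            assert billing.invoiceAmount("C99", 100.0) == 0.0;   // inactive customer is not
            System.out.println("Billing component behaves as expected against the mock.");
        }
    }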

Note also that Service-Oriented Architectures (SOA) are necessarily based on Software Component Architectures. Components are then implemented as services (Web services, CORBA, RMI, etc.), which provide the benefit of being loosely coupled.


9 May 2009

Risk Analysis 101

Filed under: Project Management,RUP — Tags: — admin @ 09:33

In my experience, Risk Analysis is primarily about communication. If the communication going around the project is not open and efficient, no risk analysis approach will save it.
On the other hand, if there is good communication going, a simple risk analysis methodology will do wonders.

The objective of a risk analysis is to identify, quantify and as much as possible mitigate the effects of events that have the potential to prevent a project from reaching its objectives. A risk analysis is not about identifying dysfunctions or people to blame.

The goals of a risk analysis are to:

  • give confidence to the Project Manager that all the contingencies have been considered
  • help working teams to focus on the key issues
  • mitigate the potential impact of certain risks
  • help to prepare for the unexpected
  • improve the control over the development life-cycle and increase the capability to achieve the project objectives

A common method consists of brainstorming sessions, which establish a list of risks. Each risk has an assignee, usually the subject-matter expert, who is responsible for helping to analyse the risk.
Let’s remind ourselves now of one fundamental principle of risk analysis: “No idea is too stupid to be mentioned”. This is why small risks and very important risks will be listed side by side.
Then each risk is the object of a detailed analysis, which determines the value of a number of attributes. In particular, risks are classified by category.

The following categories may be considered for Software Development projects:

  • Requirements
  • Analysis and Design
  • Coding
  • Test
  • Deployment
  • Training and Documentation
  • Maintenance and Support
  • General

Each risk is also allocated a value for importance. The importance is calculated using a Probability-Impact matrix. In the following example the matrix gives more weight to the impact than to the probability:

Probability \ Severity    Low    Medium    High
Low                         1       3        5
Medium                      2       6        8
High                        4       7        9

Still in the context of calculating importance, it is recommended to weight the severities for cost, quality and schedule, in order to take the project’s imperatives into account.

A risk analysis highlights a number of solutions likely to mitigate the risks. Solutions translate into actions. Some of these actions need to be undertaken quickly, in order to prevent risks from materialising: these are preventive actions. Others only apply once a risk is triggered: these are curative actions.
Each action is allocated a value for importance too, which is calculated from the importance of the risks it mitigates.
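
A small sketch of these calculations (the matrix values are those of the example above; the weighting scheme and the rule that an action inherits the summed importance of the risks it mitigates are illustrative assumptions, not a prescribed formula):

    import java.util.List;

    class RiskScoring {
        enum Level { LOW, MEDIUM, HIGH }

        // Probability-Impact matrix from the example above: rows = probability, columns = severity.
        private static final int[][] MATRIX = {
            // severity:  Low Medium High
            {1, 3, 5},   // probability Low
            {2, 6, 8},   // probability Medium
            {4, 7, 9},   // probability High
        };

        static int importance(Level probability, Level severity) {
            return MATRIX[probability.ordinal()][severity.ordinal()];
        }

        // Assumed weighting of the severities for cost, quality and schedule (project imperatives).
        static double weightedImportance(Level probability, Level costSev, Level qualitySev,
                                         Level scheduleSev, double wCost, double wQuality, double wSchedule) {
            return wCost * importance(probability, costSev)
                 + wQuality * importance(probability, qualitySev)
                 + wSchedule * importance(probability, scheduleSev);
        }

        // Assumed rule: an action is as important as the sum of the risks it mitigates.
        static double actionImportance(List<Double> mitigatedRiskImportances) {
            return mitigatedRiskImportances.stream().mapToDouble(Double::doubleValue).sum();
        }

        public static void main(String[] args) {
            // A schedule-driven project might weight schedule 0.5, cost 0.3 and quality 0.2.
            double risk = weightedImportance(Level.MEDIUM, Level.LOW, Level.MEDIUM, Level.HIGH,
                                             0.3, 0.2, 0.5);
            System.out.println("Weighted risk importance: " + risk);
            System.out.println("Action importance: " + actionImportance(List.of(risk, 5.0)));
        }
    }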

Risks may later be managed using a Risk Management Plan type document, or a project tracking type document, such as a Status Assessment.

The source of information should also be documented, as context for the risk analysis. For example, list the brainstorming sessions that have happened and the attendees.

When documenting the results of the risk analysis, it is recommended to first provide the catalog of risks as a summary, sorted by importance. Then describe the risks in detail, by category.
The following attributes are to be documented for each risk:

  • Description – what it is about
  • Indicator – how do we find out
  • Impact (source part, impacted part, probability, impact severity on cost, quality and planning)
  • Possible solutions – referring to actions

The risk distribution may be documented using charts, for example:

  • Severity distribution for schedule, quality and/or cost
  • Risk distribution by category (number of risks and % importance)
  • Risk control distribution (risks per person, team, group and/or organisation)

Proposed actions are listed with a reference, a description, an undertaking mechanism and associated risks (which are mitigated by the action).

In conclusion, most of the proposed actions should be preventive and therefore undertaken as soon as possible, since a fundamental principle of risk analysis is to anticipate problems. Risk analysis is not meant to provide solutions to existing problems; for those, it is already too late.
It is recommended to undertake a process analysis, as per the RUP methodology for example, in order to describe actions in detail and to anchor them within a well-known methodology.

Finally, the risk analysis identifies New risks. Risk management then consists in turning risks from New to Open when they are triggered, and from Open to Closed when they have been treated.
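
Encoded literally, that lifecycle is just a small state machine (a sketch of the transitions described above, nothing more):

    // Risk lifecycle: New -> Open (when the risk is triggered) -> Closed (when it has been treated).
    enum RiskState {
        NEW, OPEN, CLOSED;

        RiskState trigger() {
            if (this != NEW) throw new IllegalStateException("Only a New risk can be triggered");
            return OPEN;
        }

        RiskState treat() {
            if (this != OPEN) throw new IllegalStateException("Only an Open risk can be treated");
            return CLOSED;
        }
    }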

Existing problems at the time of the risk analysis aren’t identified as risks, since no probability can be associated with them, but they may be managed as open risks during risk management.


23 April 2009

Describe Business Processes with UML

Filed under: UML — Tags: , — admin @ 08:32

When describing requirements using UML, any attempt to order use cases or provide sequence information amongst use cases is bad practice, and can only lead to misuse of the UML use case diagrams.

One needs to describe the Business Processes for the system under study, as this provides, in much more detail, the order in which things shall happen.
Business Process can be documented in the Overview/Context section of a Product Requirements document.
OMG defines the BPMN notation to describe Business Processes (see http://www.bpmn.org/), which is essentially a UML 2.0 activity diagram with a number of additional icons and features. This notation, however, doesn’t add much other than confusion for the non-expert. I am indeed very much in favour of self-explanatory / unambiguous diagrams, as in my experience UML diagrams must be reviewed and approved by subject-matter experts who usually are not UML experts. So, unless you have a very good reason for using BPMN, just use activity diagrams to describe Business Processes.
BPMN, while close to UML 2.0, is still in need of consolidation with the UML notations.

Otherwise, it is important not to undertake UML without a methodology. In order to be highly successful in documenting a system of any complexity, it is important to follow a formal methodology.
I personally like to use a methodology based on RUP, especially for the documentation of the requirements and the architecture, while agile approach principles can be used for the design and the development…


4 April 2009

Calculate Earned Value with TFS

Filed under: Project Management — Tags: — admin @ 20:16

Calculate the Earned Value of a project on a weekly basis, using TFS, MSProject and MSExcel.
In this article we’ll explain how to calculate an Earned Value in days. It can be calculated in $ in a similar way… It is just a little more complicated.
It will work with TFS 2008, Office 2003 and Windows XP.
At the time of writing, it won’t work with Office 2007 and Windows Vista.

1- Define the tasks in TFS

First, the tasks from the work-breakdown-structure are entered in TFS, and assigned to team members as required.
A query will be needed to list all the tasks for your project, including the closed ones, so tasks don’t disappear as they get closed.

2- Use MSProject for the schedule

Though it is possible, it is unlikely that all the tasks of a project will be entered in TFS; typically Project Management tasks, or certain tasks performed by Consultants, will be missing.
Anyway, I like to have a semi-detailed schedule in MSProject, which covers all the tasks in the project. The tasks in TFS may be imported into MSProject automatically, using the TFS client tools, but I personally prefer to do this manually.
This is because I might have to prepare, on Friday afternoon, a project report as of Friday night, and TFS might not always be up to date. We have also experienced some problems with this interface.
I do however tend to group TFS tasks into a smaller number of tasks in MSProject, especially when there are hundreds or thousands of tasks, in order to make things easier. For this purpose, the TFS tasks can be imported automatically into an MSExcel sheet (which works much better), using the query we mentioned earlier.
Then some calculations can be performed, in order to get Remaining time and %Complete values by group of tasks.

3- Export the schedule baseline into MSExcel

For the Earned Value chart, we need the Planned Effort values.
For this, we save the schedule baseline using the Resource Usage view in MSProject. Make sure that the values displayed go every 7 days (one week), then copy and paste them into an MSExcel sheet and strip the ‘d’ suffix that comes with the numbers, so they are interpreted as numbers by MSExcel.
Now for each column we need to add the week number, so it can be referenced in the Earned Value table, in order to get the Planned Effort for each week. I personally like to use the format “2009w8” for the 8th week of 2009, for example.
We also need to calculate the sum of each column and the cumulative total of the sums across columns.
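
Outside the spreadsheet, that bookkeeping looks like the following sketch (the week-label format follows the convention above; the weekly planned-effort figures and the first baseline week are placeholders):

    import java.time.LocalDate;
    import java.time.temporal.WeekFields;

    class PlannedEffortBaseline {
        // Week label in the "2009w8" style, using ISO week numbering.
        static String weekLabel(LocalDate date) {
            WeekFields wf = WeekFields.ISO;
            return date.get(wf.weekBasedYear()) + "w" + date.get(wf.weekOfWeekBasedYear());
        }

        public static void main(String[] args) {
            // Placeholder weekly planned effort in days, one value per 7-day column of the baseline.
            double[] plannedPerWeek = {4.0, 6.5, 8.0, 7.0, 5.5};

            LocalDate weekStart = LocalDate.of(2009, 2, 16); // assumed first week of the baseline
            double cumulative = 0.0;
            for (double planned : plannedPerWeek) {
                cumulative += planned;
                System.out.printf("%s  planned=%.1f  cumulative=%.1f%n",
                                  weekLabel(weekStart), planned, cumulative);
                weekStart = weekStart.plusWeeks(1);
            }
        }
    }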

4- Get TFS Tasks updated

The Developers / Team members need to update the tasks that are assigned to them in TFS at least once a week. It is usually convenient to get them to do that at the same time they enter their timesheet. They need to update the Remaining Time and the status of the tasks they’ve been working on.

5- Update the schedule

The list of tasks in MSExcel can now be refreshed automatically with the latest values in TFS, and the schedule can be updated (manually), in order to reflect the progress on the project.

6- Update the Actuals from TFS Timesheet

In theory, TFS Timesheet can update the TFS Tasks Completed Time automatically, but at the time of writing we haven’t been able to get this to work properly.

So, in order to obtain the actuals, we set up a pivot table in MSExcel, with an external data source pointing to a view which refers to the tfstimesheet table in SQL Server on the TFS Server.

We just needed the Work Item Id on the left, the sum of hours from timesheet entries in the middle and nothing on top, in order to get only one column with hours. Then a SUMIF formula allowed us to update the Completed time of the tasks listed in a different MSExcel sheet and to publish these values back to TFS.
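
For clarity, here is the same aggregation expressed as a small sketch rather than as a SUMIF formula (the timesheet rows are placeholder data standing in for the rows returned by the view on the TFS Server):

    import java.util.LinkedHashMap;
    import java.util.Map;

    class TimesheetActuals {
        record Entry(int workItemId, double hours) {}

        // Equivalent of the SUMIF: total completed hours per Work Item Id.
        static Map<Integer, Double> completedHoursByWorkItem(Entry[] entries) {
            Map<Integer, Double> totals = new LinkedHashMap<>();
            for (Entry e : entries) {
                totals.merge(e.workItemId(), e.hours(), Double::sum);
            }
            return totals;
        }

        public static void main(String[] args) {
            // Placeholder timesheet rows: (work item id, hours booked).
            Entry[] entries = {
                new Entry(1012, 3.5), new Entry(1012, 4.0), new Entry(1047, 6.0),
            };
            // {1012=7.5, 1047=6.0}: these totals update the Completed time of the listed tasks.
            System.out.println(completedHoursByWorkItem(entries));
        }
    }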

7- Update the Earned Value data from MSProject

The Estimate At Completion (or EAC) for the project will be provided by the value in the Work column in MSProject.

The Earned Value is calculated from the %Complete of the project multiplied by the baseline budget in days (the Total Budget).

The Total Budget is saved every week, because it can change over time, as the project needs to be re-baselined when significant changes happen. If the history of the Total Budget were not saved, past Earned Value figures would be distorted.
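
Putting the figures together for one reporting week (a sketch only: the formulas are the ones stated above, the numbers are placeholders, and the variance and index lines are standard earned-value indicators rather than anything TFS-specific):

    class EarnedValueWeek {
        public static void main(String[] args) {
            // Placeholder figures for one reporting week, all expressed in days.
            double totalBudget     = 120.0; // baseline budget (Total Budget) saved for this week
            double percentComplete = 0.45;  // %Complete of the project, from MSProject
            double plannedValue    = 60.0;  // cumulative planned effort to date, from the baseline
            double actualCost      = 58.0;  // cumulative completed time, from the timesheets
            double eac             = 128.0; // Estimate At Completion = Work column in MSProject

            double earnedValue = percentComplete * totalBudget; // EV = %Complete x Total Budget

            // Standard earned-value indicators derived from the figures above.
            double scheduleVariance = earnedValue - plannedValue; // negative means behind schedule
            double costVariance     = earnedValue - actualCost;   // negative means over budget
            double spi = earnedValue / plannedValue;
            double cpi = earnedValue / actualCost;

            System.out.printf("EV=%.1f  SV=%.1f  CV=%.1f  SPI=%.2f  CPI=%.2f  EAC=%.1f%n",
                              earnedValue, scheduleVariance, costVariance, spi, cpi, eac);
        }
    }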

8- Draw the Earned Value Chart

See also this post for an approach to calculating the tolerance for earned value using the critical path: http://www.casoft.com.au/2010/03/earned-value-and-tolerance-using-critical-path.html


31 March 2009

Unambiguous and understandable UML

Filed under: UML — Tags: , — admin @ 17:59

The original and fundamental philosophy of UML is to be unambiguous and understandable by most, without requiring in-depth knowledge of complex semantics.
This philosophy had for objective to facilitate communication between the different stakeholders of software engineering projects, and to federate the notations to promote common understanding and a shared vision.
It is obvious to me that this original philosophy is being progressively diluted amongst the different additions that have been made to the standard over the years.

I find myself very much in tune with this presentation, umlbooch.ppt, made by Grady Booch, which promotes the need to address the increasing complexity of the systems to develop.
My personal vision is to try to keep UML unambiguous and understandable by most, in order to get as many people on board as possible and to address the problem of increasing complexity.


29 March 2009

Project Take Over

Filed under: Project Management — Tags: — admin @ 11:43

Taking over a new project, a new role or a new job… Restarting.

The objectives of a project take-over are (in this order):

  1. Ensure long-term success of the Project Manager
  2. Ensure success of the project

In order to take over a project successfully, the following steps shall be undertaken:

  • Establish authority and leadership with confidence
  • Establish relationships with every individual – learn about each one of them: their skills, experience, what type of management gets the most out of them, their motivators, personality and character.
  • Assess the project – take ownership of the project plan and the schedule, and assume responsibility from there on – if the plan does not seem achievable, perform the following actions in this order: 1. try to reduce scope, 2. try technical simplifications, 3. justify and ask for more resources, and 4. propose to replan the project.
  • Assess and document Risks and Issues – perform a complete risk analysis, in order to confirm that the project objectives are achievable
  • Document Vision and Action Plan – based on a well-known methodology
  • Establish Project Management Master Mind

Spend a fair amount of your time improving your leadership ability:

  • Provide navigation
  • Establish solid ground
  • Influence
  • Process knowledge
  • Get momentum
  • Focus on victory
  • Establish an inner circle
  • Maintain priorities