Thursday, March 31, 2011

ASP.NET Development: A Brief History and Advantages

ASP.NET is one of the leading web application development frameworks, allowing programmers to build dynamic web sites, web applications and web services. It was developed and launched by Microsoft. Currently there are millions of developers, and a significant number of software development companies, opting for ASP.NET for their development needs. Version 1.0 was released in January 2002, and the current version is 4.0. ASP.NET is based on the .NET Framework and is the successor to Microsoft's Active Server Pages (ASP) technology. Furthermore, it is built on the Common Language Runtime (CLR), giving programmers the freedom to write their code in any supported .NET language.

Some of the major advantages of ASP.NET development can be summarized as follows:

- As part of the Microsoft technology stack, a programmer can rest assured about the quality of the framework. In addition, programmers can get support directly from Microsoft.

- Serves as one of the prominent solutions for designing robust, dynamic web sites, web applications and web services.

- Frequently updated by Microsoft to meet cutting-edge technological requirements; developers can download these updates as and when required.

- Can be used alongside various other technologies such as C, C++, C#, Java/AJAX, Flash/Flex, and many others.

- Cross-browser compatibility, as the solutions produced work on multiple browsers such as Internet Explorer, Firefox, Opera, Google Chrome, etc.

- Gives programmers the freedom to write ASP.NET code in any .NET language, thanks to the Common Language Runtime.

- Provides a SOAP (Simple Object Access Protocol) extension framework, which allows components to process SOAP messages.

- Is used by millions of developers around the globe, proving it a prominent tool for the development process.

- Supports themes, templates, add-ons and plugins, along with several other features.
Apart from these, there are several other advantages that programmers can take advantage of in their development process.

Asp.net development could provide thriving solutions in the following categories:

-> Business and corporate websites

-> Social/business/community networking websites

-> Web-based application solutions

-> Custom CMS (Content Management System)

-> Custom CRM (Customer Relationship Management)

-> And several other solutions, customized to meet individual or business-specific requirements.

Tuesday, March 29, 2011

More Computer Bods Please: The Demand for IT Professionals Is Soaring and Will Continue to Grow

The rate of growth is five times faster than other sectors in the UK, meaning that over half a million new IT and telecoms professionals will be needed over the next five years. Some 110,000 new staff will be needed this year alone in order to keep pace with demand; half of these will move into the sector from other occupations, with around 17 per cent drawn directly from education. The jobs created will be high-value roles requiring sophisticated business skills alongside a high level of technical competence. Karen Price, Chief Executive of e-Skills UK, commented that more young people were required in the industry. "There is a particular need for a new type of development programme that helps young people move easily into IT roles," she said, adding that continued action was needed "to attract talent from all sources, particularly women". According to careers website Prospects, women are severely under-represented in the sector, with only 18 per cent of the workforce being female. This means that opportunities for women are extremely promising, especially given the existence of various support groups for female IT graduates such as the British Computer Society group 'BCS Women'.

Within the retail sector, demand for IT professionals rose by 46 per cent over the last six months of 2010, according to specialist recruitment agency ReThink Recruitment. The firm says that professionals with experience of multi-channel and e-commerce IT are being deterred from moving away from their current employers by the lure of 15-20 per cent pay increases, particularly project managers and business analysts. ReThink attributes this to a current programme of upgrading and replacement.

Another growth area is the demand for computer security professionals as attacks on systems proliferate, and security firms estimate this will increase as the movement towards cloud computing gathers pace. According to research company Frost & Sullivan, around half of the current generation of security experts believe their skills will be in greater demand as cloud computing becomes more popular. Last autumn the company conducted research among 7,547 information security workers. John Colley, from the EMEA region, estimated that over half of those employees are already using cloud systems, while Frost & Sullivan's research suggested that many believe the exposure of sensitive information through unauthorised systems or personnel, or data leakage, to be among the top concerns about the cloud. Colley also commented that cyber attacks were one of the top concerns for workers in government and the public sector.

http://www.millhousedata.com/

I am a freelance writer, researcher and administrator with an interest in many contemporary issues across a wide variety of genres and business sectors. I have a particular interest in energy and the environment, which is the main theme of my blog. I have been published in a wide variety of magazines since I started writing in 1997, and I also write regularly for the social media forum of a technical recruitment consultancy based in Milton Keynes. More recently I have started writing articles for the website of a business software company, and I also work as an online data input administrator for a London-based research company involved in gathering investment information for the food and renewable technology industries. I am a graduate of Bath Spa University with a BA (Hons) in Psychology and English (2:1).

Article Source: http://EzineArticles.com/?expert=Robin_Whitlock

Article Submitted On: January 19, 2011

Monday, March 28, 2011

Answers for Technology Related Problems

In this age of technological advancement, we cannot survive without relying on IT and computers. There is no way we can imagine living in this world without some knowledge of technology. Lifestyles, from the individual to the collective, have been influenced by technology ever since the first machine was built.

Since the introduction of computers, every company, whether big or small, has jumped into training its employees to develop computer skills in order to take advantage of this latest technology. Many employed and unemployed professionals have also started learning computer-related skills to increase their future job opportunities and to reduce the risk of future job loss.

Computers have been around us for almost half a century now. Much time has passed since the introduction of terms like software and hardware, which have become more and more intertwined as developments have been made. Computers have evolved into laptops, palmtops, PDAs and smartphones.

Technology is now a part of everything we do; we are living in a world where even seconds are measured by digital clocks. With so many gadgets and machines around us, we cannot neglect our increased dependence on them. Information about computers has also expanded by leaps and bounds.

Still, things can be problematic at times when these devices do not work properly, or when we have to look for a new hardware or software solution. If you are a computer expert, graphic designer or web development professional, you are likely to encounter such problems on a daily basis. They may range from animation issues to virus attacks, from data entry to programming errors, and so on.

Not only this, many related problems also appear, and work piles up on a daily basis until you fix them, whether on a home PC, a car navigator or a company laptop. It can be truly nerve-racking to figure out the nature of a problem and its solution at the same time.

However, as troublesome as dealing with technology can seem at times, you are also provided with some decent options. One way is to ask a friend or colleague, or simply to search a question-and-answer portal or forum that holds significant information about computers, and follow the instructions to get the problem fixed.

This is one of the simplest ways to get answers to your technology-based problems. Once you join such portals, you can also pick up a lot of information about technology you were unaware of, which will increase your knowledge and expertise in this field.

Sunday, March 27, 2011

Java Technical Interview: Tips and Tricks

A technical interview is the most crucial round of the whole recruitment process. Although there are other rounds before the interview, such as a written test and, in some cases, a group discussion, the interview is the most feared one. And why shouldn't it be? After all, the feeling that the fate of your career lies in the hands of a single individual sitting on the other side of the table can freak anybody out. So, in order to improve your chances in the interview, it is recommended that you exploit all the resources made available to you in the right fashion.

A technical interview, in most cases, is not aimed at testing the advanced knowledge of the candidate; it is aimed at testing the basics and how clear the candidate's concepts are. The interviewer tries his or her best to corner the candidate with tricky questions, expecting a smart answer. What you should keep in mind is that the interviewer is not deliberately trying to reject you; he or she just wants to know whether you fit the work profile or not. The whole process is based on one simple fact: a candidate with clear concepts is easy to train and easy to work with.

The questions can belong to any topic, such as software engineering, operating systems, database management, etc. But you will surely face a lot of questions pertaining to the Java language. In recent times, the most sought-after candidates are those with a good knowledge of Java. A Java technical interview will test your basics and also whether you are able to apply your theoretical knowledge in practical scenarios. Good theoretical knowledge is desirable, but decent practical knowledge can work wonders for you during your Java technical interview.

A Java technical interview can be broken down into two sets of questions: core Java questions and advanced Java questions. Core Java questions cover the language constructs, Java keywords, the concept of classes, inheritance, packages, etc., whereas advanced Java questions are all about server-side programming.

One can find a number of online web portals which help you to improve your Java skills by publishing a series of Java skill tests. These skill tests are designed by industry and Java experts and thus can give a good picture of the technical interview. On such portals you can find numerous sets of advanced and core Java questions and answers.

Some core Java questions look like this:
1. Can a class be declared protected?
2. Can an inner class be a final class?

Advanced Java questions, on the other hand, can include Java/J2EE interview questions, Java Spring interview questions, JavaScript interview questions, etc.

Saturday, March 26, 2011

Importance of C Programming

'C' seems a strange name for a programming language. But this strange-sounding language is one of the most popular computer languages today, because it is a structured, high-level, machine-independent language. It allows software developers to develop programs without worrying about the hardware platforms where they will be implemented. The root of all modern languages is ALGOL, introduced in the early 1960s. C evolved from ALGOL, BCPL and B, and was created by Dennis Ritchie at Bell Laboratories in 1972.

C uses many concepts from these languages and added the concept of data types and other powerful features. Since it was developed along with the UNIX operating system, it is strongly associated with UNIX. During the 1970s, C evolved into what is known as 'traditional C'. To ensure that the C language remained standard, in 1983 the American National Standards Institute (ANSI) appointed a technical committee to define a standard for C. The committee approved a version of C in December 1989 which is now known as ANSI C. It was then approved by the International Organization for Standardization (ISO) in 1990. This version of C is also referred to as C89.

The increasing popularity of C is probably due to its many desirable qualities. It is a robust language whose rich set of built-in functions and operators can be used to write any complex program. The C compiler combines the capabilities of an assembly language with the features of a high-level language, and therefore it is well suited for writing both system software and business packages. Many of the C compilers available in the market are themselves written in C. Programs written in C are efficient and fast, thanks to the language's variety of data types and powerful operators. C is highly portable, which means that C programs written for one computer can run on another with little or no modification. Portability is important if we plan to use a new computer with a different operating system. The C language is well suited for structured programming, requiring the user to think of a problem in terms of function modules or blocks.

A proper collection of these modules makes a complete program. This modular structure makes program debugging, testing and maintenance easier. Another important feature of C is its ability to extend itself. A C program is basically a collection of functions supported by the C library, and we can continuously add our own functions to the library. With the availability of a large number of functions, programming tasks become simple. I know most of you are not finding good tutorials on C programming. I think you can get the best C programming tutorial and the largest collection of source code at http://www.thecwizard.com, which is a well-organized site, especially for newbies.

Friday, March 25, 2011

Game Designer Job Outlook - Determining Factors

Many individuals are interested in the factors that determine the game designer job outlook. There are many things that people must consider when they are going to enter the electronics field. Giving proper consideration to all of these factors is very important for the career success of anyone entering the profession.

A person's location has a lot to do with their likelihood of getting a job. If an area is sparsely populated, breaking into the field can be difficult. Heavily populated areas usually offer more options in terms of companies that an individual can work for in this role.

The amount of education that an individual has will have an impact on their ability to find work. Generally if a person is educated in multiple types of programming they will be able to find work very quickly. The more diversity people have in their skill set the easier it is to find work.

The number of hours that a person is willing to put in will also influence their likelihood of finding employment. Most individuals in this profession must put in a great deal of time before they earn a good salary. People who show a willingness to put in as much time as they can will have no trouble finding steady employment.

The title that a person has will influence their job security. People that have a management or lead position usually will not have any worries about being laid off. If someone has seniority within a company they usually do not have to worry about their employment future.

The market is always expanding in this field. The expansion is directly related to the number of consoles that are available for people to play titles on. This is advantageous for any individual choosing to become involved in the profession. Individuals that are well trained can create a career for themselves that will last until retirement.

Most people are happy about the game designer job outlook for the future. People that have chosen to become involved with this industry will most often have job security. These individuals usually are challenged by the tasks that they are requested to perform on a daily basis. Most individuals prefer to have a job that is challenging from an intellectual standpoint. Many times people find it is very easy to use the skills and knowledge they gain in a way that is both fulfilling and profitable.

Wednesday, March 23, 2011

How to Avoid Top iPhone Application Development Mistakes

The launch of the iPhone has revolutionized the way the world talks about smartphones. iPhone development has become the benchmark of mobile application development. iPhone developers are churning out new applications day in and day out to meet the growing needs of users. Offshore development centers around the world are busy developing business applications, fun applications and many others for iPhone users across the world. Most businesses outsource their custom development projects to these centers to get better deals.

The competition surrounding iPhone application development has led to many unsuccessful apps, which have served neither the businesses nor the developers. Thousands of dollars have been lost on these unsuccessful development projects, most of which failed due to some common mistakes committed by developers.

Common Mistakes to Avoid in iPhone Development

Don't try to over-invent - It is wise to keep apps simple. This saves time for developers and is appreciated by the end users. In most cases, development needs can be fulfilled by the iOS SDK, which has an expansive library of UI elements. In some rare cases you might need to build a novel UI from scratch, but the development process becomes complex and unreliable when every element is invented from scratch. Try getting buttons, sidebars, dialogs and tables from the SDK's standard UI rather than going for custom development.

Keep the resolution right - As with a photograph or video, it is very important to get the resolution correct right from the start. Mobile application development lets you have a lot of stunning, eye-catching graphics in 2D and 3D animation. The iPhone 4's Retina display performs best with HD graphical content. Some iPhone developers create larger graphics for good resolution, but this slows down the application, so it is important to get things right. Use the most recent SDK, and make sure the application is optimized for the latest iOS.

Get the animation right - You need to strike a balance between the right kind of animation and usability. The USP of an application is its functionality more than its animation. Many developers commit the mistake of overdoing the animation, which can lead to application slowdowns and an increase in size. On the other hand, poor animation does not bring the X-factor to an application, making it unpopular.

Try to avoid multitasking - Many developers around the world work on multiple development projects at the same time. This is not advisable, as it will be difficult for you to prioritize between the applications. Each application has its own specific requirements and needs a dedicated developer working on it.

Application development isn't just about technology; it needs a lot of creativity, and you will also need to keep the functionality in mind. There are protocols that need to be followed in creating the best applications for iPhone, as mentioned at http://www.evontech.com/iphone-solutions.html.

Tuesday, March 22, 2011

Avoid the Cyber Threat by Using a Safe Programming Language

The Problem

Since the existence of networked, automated information systems, the so-called "Cyber Threat" has been known to be a major security and business continuity risk. One of the very first worms, the "Morris worm", crippled the e-mail infrastructure of the early internet. The Cyber Threat is not thoroughly understood even by many executives of the software industry, and the situation amongst the software user community is even worse. An Asian nation-state actor recently subverted the Google Mail login system by exploiting a weakness in the Internet Explorer version used by Google employees. The same Asian nation-state is also suspected of having illegally downloaded the full design blueprints of the largest European jet engine manufacturer.

The Cyber Threat is real and may have grave long-term consequences for those at the "receiving end" of a cyber attack.

The Solution

Unfortunately, there is no "silver bullet" solution to this problem. Rather, a holistic solution comprising technology, business processes, user education and security rule enforcement must be employed to properly secure valuable data. The determined support of the CEO, CIO and CFO is clearly required to achieve that. CFOs understand that there exist strategic business risks which are very difficult to quantify in monetary terms, but they know that these risks might kill the whole business if left unaddressed. For example, criminal accounting practices by mid-level managers could kill any company, so the CFO has to ensure the books are regularly audited by an independent authority. The same amount of diligence is required to secure companies' confidential data against the Cyber Threat.

This article is about a key aspect of defending against the Cyber Threat - securing software. It is important to note that, again, there is no "silver bullet" to secure a critical software system, but many of today's security flaws (such as "buffer overflow exploits") could be avoided simply by using a Safe Programming Language. Such languages make sure that low-level cyber attacks are automatically thwarted by the system infrastructure.

What is a "Safe Programming Language"?

As with many subjects in information technology, there is no authoritative definition of the term. Salesmen and consultants bend the term to suit their needs. My definition is simple: a Safe Programming Language (SPL) assures that the program runtime (the heap, the stack, pointers and machine code) cannot be subverted because of a programming error. An SPL will make sure that a process immediately terminates upon detecting such a low-level error condition. The cyber attacker will not be able to subvert the program runtime and "inject" his own malicious program code. The programmer can then inspect the "remains" of the terminated process (such as a core file) in a useful manner to analyze and rectify the programming error.

Examples of Safe Programming Languages (in alphabetical order): C#, Cyclone, Java, Modula-3, Sappeur, SPARK Ada, Visual Basic .NET

Examples of Unsafe Programming Languages (in alphabetical order): Ada, Assembly Language, C, C++, Fortran, Modula-2, (Object-)Pascal

What should I do as a Programmer?

Whenever you start a new software development project, select a Safe Programming Language instead of choosing the "industry standard" of unsafe languages like C or C++. There exist high-performance languages like Cyclone, Modula-3 and Sappeur which can compete with C/C++ in terms of memory and processing time requirements. Don't think that you are "one of the few programmers who can write bug-free code".

About the Author

Frank Gerlach earned a "Diplom-Ingenieur" in "Informationstechnik" from Berufsakademie Stuttgart. He has worked for more than ten years as a software development engineer on flight reservation, document management, internet banking, financial data distribution and computer-aided design systems. He is the inventor of the Sappeur language.

Resources

This section lists safe programming languages. Mr Gerlach is the inventor of Sappeur and controls the Sappeur website. He has no financial or other relations to the Cyclone and Modula-3 websites or their owners.

Cyclone: http://cyclone.thelanguage.org/

Modula-3: http://www.modula3.org/

Sappeur: http://www.sappeur.eu/

Article Source: http://EzineArticles.com/?expert=Frank_Gerlach

Monday, March 21, 2011

Joomla E-Commerce Extensions

The popularity of Joomla development needs no introduction. Throughout the world, it is one of the most widely used content management systems. One of the reasons for its popularity is the Joomla e-commerce extensions that can be easily integrated for clients. Joomla developers can create awesome e-commerce solutions using these extensions to meet your specific needs. Most of these are easily available online, and you can purchase them for a small amount, which helps fund the continuous development of these extensions.

In this day and age of online commerce, small and big businesses alike opt for custom development of their e-commerce platforms. This is where Joomla e-commerce extensions come into use. From shopping websites to bidding websites, Joomla has a solution for all. Developers in offshore development centers around the world build high-end e-commerce websites to meet the demands of a competitive market.

Popular Joomla e-commerce extensions:

VirtueMart - This is the most popular extension in Joomla. The original component was used in the initial years of development and is still being updated to keep up with current requirements. A favorite with developers, it is a robust component which allows you to customize your storefront with custom themes. With it, you can handle an unlimited number of categories, products, orders, discounts, shopper groups and customers.

Easy PayPal - This integrates two powerful entities of e-commerce: Joomla and PayPal. With this, you can easily set the PayPal parameters using the default Joomla bot configuration screen. Parameters like email address, dollar amount, currency, item name, item number, button image, or a combination of all of the above can easily be set on your website using Easy PayPal. This is another favorite with Joomla developers.

Donation Thermometer - For those who plan to gather donations on their site, Donation Thermometer serves the purpose. It displays a red thermometer showing the donation total rising with each donation given. This is one of the more unique Joomla e-commerce extensions available.

Jcontent Subscription - One that developers vouch for, Jcontent Subscription is a component created for subscription-based websites. It is ideal for websites selling informational products and services. You can create subscriptions for individual users, for any category of article, or for any section. You can also customize the payment structure using this extension.

SimpleCaddy - This is another powerful extension that lets you create a shopping cart easily and quickly, without having to set up a complete shop. It is ideal for websites which sell simple products or services and are not complete e-commerce platforms. SimpleCaddy helps in redirecting the money to your PayPal account.

There is a lot that can be done using the most popular e-commerce extensions in Joomla, as described at http://www.evontech.com/joomla-development.html.

Sunday, March 20, 2011

Program Testing And Debugging

Testing and debugging refer to the tasks of detecting and removing errors in a program, so that the program produces the desired result on all occasions. Every programmer should be aware of the fact that rarely does a program run perfectly the first time. No matter how thoroughly the design is carried out, and no matter how much care is taken in coding, one can never say that the program will be 100 per cent error-free. It is therefore necessary to make efforts to detect, isolate and correct any errors that are likely to be present in the program.

Types of Errors

There might be many errors, some obvious and others not so obvious. All these errors can be classified under four types, namely syntax errors, run-time errors, logical errors, and latent errors.

Syntax errors: Any violation of the rules of the language results in syntax errors. The compiler can detect and isolate such errors. When syntax errors are present, the compilation fails and is terminated after listing the errors and the line numbers in the source program where the errors have occurred. Remember, in some cases the line number may not exactly indicate the place of the error. In other cases, one syntax error may result in a long list of errors; correcting one or two errors at the beginning of the program may eliminate the entire list.

Run-time errors: Errors such as a mismatch of data types or referencing an out-of-range array element go undetected by the compiler. A program with these mistakes will run, but produce erroneous results, which is why the name run-time errors is given to them. Isolating a run-time error is usually a difficult task.

Logical errors: As the name implies, these errors are related to the logic of the program's execution. Such mistakes as taking a wrong path, failing to consider a particular condition, and an incorrect order of evaluation of statements belong to this category. Logical errors do not show up as compiler-generated error messages; rather, they cause incorrect results. These errors are primarily due to a poor understanding of the problem or an incorrect translation of the algorithm into the program.

Latent errors: A latent error is a 'hidden' error that shows up only when a particular set of data is used. For example, consider the following statement:

ratio=(x+y)/(p-q);

An error occurs only when 'p' and 'q' are equal. An error of this kind can be detected only by using all possible combinations of test data.

Program Testing

Testing is the process of reviewing and executing a program with the intent of detecting errors, which may belong to any of the four kinds discussed above. We know that while the compiler can detect syntactic and semantic errors, it cannot detect the run-time and logical errors that show up during the execution of the program. Testing, therefore, should include the steps necessary to detect all possible errors in the program. It is, however, important to remember that it is impractical to find all errors. The testing process may include the following two stages:

1. Human testing

2. Computer-based testing

Human testing: This is an effective error-detection process and is done before computer-based testing begins. Human testing methods include code inspection by the programmer, code inspection by a test group, and a review by a peer group. The code is examined statement by statement and is analyzed with respect to a checklist of common programming errors. In addition to finding errors, the programming style and choice of algorithm are also reviewed.

Computer-based testing: This involves two stages, namely compiler testing and run-time testing. Compiler testing is the simpler of the two and detects as-yet-undiscovered syntax errors. The program executes when the compiler detects no more errors. Does this mean that the program is correct? Will it produce the expected results? The answer is no: the program may still contain run-time and logical errors.

Run-time errors may produce run-time error messages such as "null pointer assignment" and "stack overflow". When the program is free from all such errors, it produces output which might or might not be correct. Now comes the crucial test, the test for the expected output. The goal is to ensure that the program produces expected results under all conditions of input data.

Testing for correct output is done using test data with known results for the purpose of comparison. The most important consideration here is the design or invention of effective test data. A useful criterion for test data is that all the various conditions and paths that the processing may take during execution must be tested.

Program testing can be done either at the module (function) level or at the program level. A module-level test, often known as a unit test, is conducted on each of the modules to uncover errors within the boundary of the module. Unit testing becomes simple when a module is designed to perform only one function.

Once all modules are unit tested, they should be integrated to perform the desired function(s). There are likely to be interfacing problems, such as data mismatches between the modules. An integration test is performed to discover errors associated with interfacing.

Program Debugging

Debugging is the process of isolating and correcting errors. One simple method of debugging is to place print statements throughout the program to display the values of variables. This reveals the dynamics of the program and allows us to examine and compare information at various points. Once the location of an error is identified and the error corrected, the debugging statements may be removed. We can use the conditional compilation statements, discussed in Chapter 14, to switch the debugging statements on or off.
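The book's examples use C's conditional compilation (#ifdef) for this. As an illustrative aside, a sketch of the same debug-print technique in Go, where a constant flag lets the compiler strip the trace statements from a release build (the function and flag names here are mine, not from the text):

```go
package main

import "fmt"

// debug plays the role of a conditional-compilation switch: set it to
// false and the compiler eliminates the dead trace branches.
const debug = true

// average sums the values and traces each intermediate sum so we can
// watch the dynamics of the computation while hunting an error.
func average(values []int) int {
	sum := 0
	for _, v := range values {
		sum += v
		if debug {
			fmt.Printf("after adding %d, sum = %d\n", v, sum)
		}
	}
	return sum / len(values)
}

func main() {
	fmt.Println("average:", average([]int{10, 20, 30}))
}
```

Once the error is located and corrected, flipping `debug` to false silences the trace without deleting any statements.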

Another approach is to use the process of deduction. The location of an error is arrived at through elimination and refinement, working from a list of possible causes of the error.

PROGRAM EFFICIENCY

Two critical resources of a computer system are execution time and memory. The efficiency of a program is measured in terms of these two resources. Efficiency can be improved with good design and coding practices.

Saturday, March 19, 2011

DataGrid Control in WPF and LINQ Databinding

When I started working in WPF, in my first sample I tried to locate the DataGrid control in the toolbox. To my surprise, I couldn't find it, so I started searching the web for information. I found that the DataGrid is not part of WPF in the framework itself; it is available from CodePlex for public download.

.Net Framework 4.0

The above paragraph does not apply to the .NET Framework 4.0, which already comes preloaded with the WPF DataGrid control. The only difference I noticed is when dragging the control from the toolbox: the CodePlex DataGrid defaults AutoGenerateColumns to true, whereas the .NET Framework control defaults AutoGenerateColumns to false.

Setup WPF Toolkit for .NET Framework 3.5

After downloading it from CodePlex, install it. It will not be available in the toolbox right out of the box; you have to choose the installed component to be listed there. To get the DataGrid into the toolbox, select it from the list of components in the "Choose Toolbox Items" dialog box, under the WPF Components section.

DataGrid DataBinding

Now, whether we are using .NET Framework 4.0 or .NET Framework 3.5, we have a DataGrid to start working with. Binding it is not much different from the way we bind the ListBox in WPF, but here we get many more options, as we have in GridView. For example, there is a standard set of predefined columns for simple usages; for advanced usages we can use template columns, as we do in GridView.

To keep this sample simple, we set AutoGenerateColumns = True. This takes care of all the column creation, so we can just bind the WPF DataGrid through its ItemsSource.

Fetch Data using Linq to SQL

It is much easier to use LINQ to SQL to fetch the data from the database. First, add the LINQ to SQL classes by choosing them from the Add New Item window:

Go to Add New Item > select LINQ to SQL Classes

Name it as NorthwindData.dbml

Then drag the Categories table from the Server Explorer window onto the NorthwindData.dbml designer.

Now you can simply bind the data as follows

DataGrid1.ItemsSource = (New NorthwindDataDataContext).Categories

Linq and Lambda to work with data

Now it is time to see how to use LINQ and lambda expressions to work with the data. Using a lambda expression and the Select method of the list, we transform each Category object into an anonymous type. Basically, we don't want to display the code and picture fields in the DataGrid, so we don't pass them to the control.

Though it would be easier to do this with templates, I want to show how to transform objects using lambda expressions. In a lambda expression you can create a new anonymous type using the New With keyword; whatever you put inside the { } after New With becomes a member of the anonymous type. A LINQ query, though, is often more readable than the equivalent lambda expressions.
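As an aside, the same projection idea can be sketched in Go, whose anonymous struct types play the role of VB's New With { ... }. The Category shape below is illustrative, not the generated LINQ to SQL class:

```go
package main

import "fmt"

// Category stands in for the Northwind Categories row; the field
// names are illustrative.
type Category struct {
	ID          int
	Name        string
	Description string
	Picture     []byte
}

// project drops the ID and Picture fields by copying each Category
// into an anonymous struct holding only what the grid should show.
func project(cs []Category) []struct{ Name, Description string } {
	out := make([]struct{ Name, Description string }, 0, len(cs))
	for _, c := range cs {
		out = append(out, struct{ Name, Description string }{c.Name, c.Description})
	}
	return out
}

func main() {
	cats := []Category{
		{1, "Beverages", "Soft drinks, coffees, teas", nil},
		{2, "Condiments", "Sweet and savory sauces", nil},
	}
	for _, r := range project(cats) {
		fmt.Println(r.Name, "-", r.Description)
	}
}
```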

Thursday, March 17, 2011

Possibility of Using Windows and Linux Applications at the Same Time

Comparisons between Windows and Linux applications are common. In terms of cost, some will suggest Linux instead of Windows: Linux is free to obtain, and no matter how many computers you have, you can distribute and install it at no cost. However, if you are looking for easy-to-learn applications, some will recommend Windows, which is not difficult to administer and control because Windows applications are user friendly.

If you are currently working on Windows but want to make use of Linux applications, Xming is the best way to use both at the same time. Because Microsoft Windows has no native support for displaying X applications, Xming provides a port of the X Window System that can run X applications on Windows. Xming is free software with easy-to-use installers. It is licensed under the GPLv2 (GNU General Public License version 2) and provides an excellent X-windows terminal emulator.

Let me guide you on how to use Windows and Linux applications at the same time:

1.) You can obtain the latest Xming from the web for free. You also need to grab the Xming-fonts installer for the core X fonts.

2.) You can select the Run option to install it directly, or save it to your desktop and run the installer from there.

3.) During the Xming installation, you will be prompted with several questions, such as the installation location, the components to install and the location for the shortcut. All you have to do is accept the defaults and click Next.

4.) After that, you will see an additional icon, XLaunch. Double-click it to review the settings. Xming can be displayed in full screen, in multiple windows, or in a single window with or without a title bar. Choose the setting that you prefer.

5.) Click Finish when the installation is complete.

6.) To run X applications, make sure that you select the Unblock option so that your firewall does not block incoming traffic to your computer.

7.) On your desktop, you will see an X symbol. Right-click the Xming symbol if you want more information about Xming.

Aside from the standard Microsoft Windows that you are currently using, you can also make use of Linux applications through Xming. Now you can enjoy the benefits of any X application, anytime you want.

I'm currently the owner of two companies and have been a writer for 8 years. I have also been a Political Adviser and Image Consultant for 5 years, and I'm a former Managing Director in a call center.

Article Source: http://EzineArticles.com/?expert=Setiram_Nisual


Wednesday, March 16, 2011

Google Go Vs Objective C

1.Introduction

"The significance of language for the evolution of culture lies in this, that mankind set up in language a separate world beside the other world, a place it took to be so firmly set that, standing upon it, it could lift the rest of the world off its hinges and make itself master of it. To the extent that man has for long ages believed in the concepts and names of things as in aeternae veritates he has appropriated to himself that pride by which he raised himself above the animal: he really thought that in language he possessed knowledge of the world." - Friedrich Nietzsche

Every computer programmer has a few opinions on why his programming language of choice is the best. There are common attributes that most programmers want, like an easy-to-use syntax, better run-time performance and faster compilation, and there are more particular features we need depending on our application. These are the main reasons why there are so many programming languages, with new ones being introduced almost daily. Despite the large amount of interest and attention paid to language design, many modern programming languages don't offer much innovation in it; Microsoft and Apple, for example, mostly offer variations on existing designs.

It was not so long ago that C stepped into the world of computing and became the basis of many other successful programming languages. Most of the members of this family stayed close to their famous mother, and very few managed to break away and distinguish themselves. The computing landscape, however, has changed considerably since the birth of C. Computers are thousands of times faster and use multi-core processors; Internet and web access are widely available; devices are getting smaller and smaller; and mobile computing has been pushed into the mainstream. In this era, we want a language that makes our lives better and easier.

According to the TIOBE Index, Go and Objective C were among the fastest-growing languages in 2009, and Go was awarded "Programming Language of the Year" that same year. TIOBE updates its index monthly, using data obtained from links to certified programmers, training providers and software vendors, assembled via the Google, Bing, Yahoo, Wikipedia and YouTube search engines. The result was more predictable for Objective C, as it is the language of the iPhone and Mac, and Apple is running strong in the market. The result gets more interesting, however, because it has not been long since Google introduced its own programming language, called Go.

2. A Little Bit Of History

Go's famous mother, Google, has dominated search, e-mail and more, so the introduction of a new programming language is not a shocker! Like many of Google's open source projects, Go began life as a "20 percent time" project, the time Google gives its staff to experiment, and later evolved into something more serious. Robert Griesemer, Rob Pike and Ken Thompson started its design, and Go was officially announced in November 2009, with implementations released for the Linux and Mac OS platforms. Google released Go under a BSD-style license, hoping that the programming community would develop and build Go into a viable choice for software development. At the moment, Go is still very young and experimental; even Google isn't currently using Go for large-scale production applications. While the site hosting the code runs a server built with Go as a proof of concept, the primary purpose of the release was to attract developers and build a Go community. Despite its uncertain status, Go already supports many of the standard tools you'd expect from a systems language.

Objective C, in contrast, has a longer and broader history. Today it is used primarily on Apple's Mac OS and iPhone, and it is the primary language of Apple's Cocoa API. Objective C was created by Brad Cox and Tom Love in the early 80s at their company StepStone. In 1986, Cox published the main description of Objective C in its original form in the book "Object-Oriented Programming, An Evolutionary Approach". Since then, Objective C has been compared feature for feature with other languages, and now it is Steve Jobs' language of choice.

There are many aspects that contribute to the design, and success or failure of a programming language. In this article, I attempt to give a general comparison of these two arguably very important languages of the future.

3. General Comparison

These days, the world is full of programming languages and they are becoming more and more general and all-purpose, but they still have their specializations and characteristics, and each language has its disadvantages and advantages.

Languages can generally be divided into many different categories. What follows is not a complete list of all comparable features; the features thought to be of more importance in comparing the two chosen programming languages were selected, and a brief explanation of each is given.

3.1 Paradigm

Objective-C is an imperative object oriented language, meaning objects can change state. Objective-C gives you the full power of a true object-oriented language with one syntax addition to the original C and many additional keywords. Naturally, object-oriented programs are built around objects, so in Objective C objects are the root of everything. A class is used to produce similar objects, called instances of the class. Classes are used to encapsulate data and the methods that belong with it. Methods are the operations that Objective-C applies to data, and they are identified by their message selectors. Objective-C supports polymorphism, meaning that several classes can have a method with the same name. Single inheritance is used for code reuse; the closest one can get to multiple inheritance is to create a class with instance variables that are references to other objects. However, the Objective-C philosophy is that programmers do not need multiple inheritance, and the language discourages it.

In Go, things are a little different. The Go designers selected a message-passing model to achieve concurrent programming, and the language offers two basic constructs, goroutines and channels, to support this paradigm. In their design FAQ, Google writes that Go both is and isn't an object oriented language! Although Go has types and methods and lets us simulate an object-oriented style of programming, there is no type hierarchy. The lack of a type hierarchy makes "objects" in Go much more lightweight than objects in Objective C. Go takes an innovative approach to objects, and programmers are not required to worry about large object trees. Since Go isn't a truly object oriented language, a programmer can solve a problem in whatever way he wants and still enjoy the object-oriented-like features.
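As a minimal sketch of that message-passing model, one goroutine can compute values and hand them to another over a channel instead of sharing state:

```go
package main

import "fmt"

// square computes squares on its own goroutine and sends the results
// back over a channel: communication instead of shared state.
func square(nums []int, out chan<- int) {
	for _, n := range nums {
		out <- n * n
	}
	close(out)
}

func main() {
	out := make(chan int)
	go square([]int{1, 2, 3, 4}, out)

	// Receiving from the channel synchronizes with the sender; no
	// locks are needed.
	for v := range out {
		fmt.Println(v)
	}
}
```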

I can't really think of any object oriented language that does not have a hierarchical inheritance mechanism, but for those that do, it seems to create a better model for flexibility and reuse. The absence of inheritance in Go is interesting indeed! As far as I remember, inheritance has always been taught to me as the punchline of object orientation. The reality is that inheritance is not the only possible mechanism for reuse in object orientation; composition is arguably a more powerful mechanism for sharing behavior than inheritance.

Object-oriented programming became very popular, especially in big companies, because it suits the way they develop software and increases their chances of completing projects successfully using teams of mediocre programmers. Object-oriented programming imposes a standard on these programmers and prevents individuals from doing too much damage. The price is that the resulting code is full of duplication. This is not too high a price for big companies, because their software is going to be full of duplication anyway.

3.2 Syntax

Objective C is an extension of standard ANSI C, so existing C programs can be adapted to use the software frameworks without losing any of the work that went into their original development. A programmer gets all the benefits of C when working within Objective C and can choose to do something in an object-oriented way, like defining a new class, or stick to procedural programming techniques. Objective-C is generally regarded as something like a hybrid between C and Smalltalk. One setback in the learning curve is the necessity of knowing basic C programming before entering the world of Objective C. C-like syntax combined with object-oriented programming often presents a long and difficult learning curve to new programmers, and Objective C is no exception.

Go is also a C family member, but I think Go manages to break from the usual coding style and make it different. Compared to Objective C, declarations are backwards: in C, a variable is declared like an expression denoting its type, whereas in Go the type follows the name, as in Basic, which is a nice idea in my opinion.

In Go: var a, b *int;

I find Go closer to a natural human language; for example, the statement "variable a is an integer" can be written as:

var a int;

This is clearer, cleverer and more regular.

Go also permits multiple assignments, which are done in parallel.

i, j = j, i // Swap i and j.

Control statements in Go do not take parentheses. The most common control statement, if, takes the form "if ( self ){" in Objective C and most other OO languages, but in Go it has the following form:

if self {

Another difference in Go is that semicolons are not required. You can optionally terminate any Go statement with a semicolon, but in reality semicolons are for parsers, and Google wanted to eliminate them as much as possible. A single statement does not require a semicolon at all, which I find rather convenient.
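Putting these syntax points together, a small runnable sketch shows left-to-right declarations, parallel assignment and an if with no parentheses and no semicolons:

```go
package main

import "fmt"

// swap returns its arguments exchanged; the parallel assignment
// happens in one step, so no temporary variable is needed.
func swap(a, b int) (int, int) {
	a, b = b, a
	return a, b
}

func main() {
	// Declarations read left to right: "variables a and b are ints".
	var a, b int = 1, 2
	a, b = swap(a, b)

	// Control statements take no parentheses around the condition.
	if a > b {
		fmt.Println("a is now larger:", a)
	}
}
```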

Go is a compiled language, similar to C. There are two Go compilers currently available, one for the x86 platform and another for AMD64. Go's compilation speed is very fast. When I first tried it (without any intended or proper measurement), it was just too damned fast! My experience with programming languages is limited and rather focused on object-oriented languages like Java, so I had never seen a speed quite like that! One of the fundamental promised goals of Go is to compile things really quickly. According to the official Go demonstration video, Go's performance is within 10-20% of C. However, I don't think that's really trustworthy until we get some performance benchmarks in the near future.

3.3. Exceptions And Generics

Objective C does not have generic types, unless the programmer decides to use C++ templates in his custom collection classes. Objective-C uses dynamic typing, which means the run-time doesn't care about the type of an object, because any object can receive messages. When a programmer adds an object to a built-in collection, it is simply treated as type id. Similar to C++, the Objective-C language has an exception-handling syntax.

Go's type system does not support generic types either; at least for now, the designers do not consider them necessary. Generics are convenient but impose a high overhead on the type system and run-time, and Go cannot stand that! Like generics, exceptions remain an open issue. Go's approach, while innovative and useful, is likely difficult for many programmers. Google's codebase is not exception-tolerant, so exceptions have been left out of the language; instead, a programmer can use multiple return values from a call to handle errors. Since Go is garbage-collected, the absence of exceptions is less of an issue than it would be in C++, but there are still cases where things like file handles or external resources need to be cleaned up. Many programmers believe that exceptions are absolutely necessary in a modern programming language. However, I like the absence of exceptions, because I find exception handling in most languages ugly. In a language like Go, where it's possible to return multiple values from functions, programmers can return both a result and a status code, and handle errors via status codes.
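A minimal sketch of that error-handling idiom: a function returns both a result and an error value, and the caller checks the error instead of catching an exception.

```go
package main

import (
	"errors"
	"fmt"
)

// divide returns a result and an error instead of throwing an
// exception: the multiple-return-value idiom described above.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	if q, err := divide(10, 4); err == nil {
		fmt.Println("result:", q)
	}
	// The error path is an ordinary value check, not a catch block.
	if _, err := divide(1, 0); err != nil {
		fmt.Println("handled:", err)
	}
}
```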

3.4. Type Systems

Compared to other object oriented languages based on C, Objective C is very dynamic. Nowadays, programmers tend to choose dynamically typed languages such as Objective C. The downside is that there is less information available at compile time. This dynamism means that we can send a message to an object even if the message is not specified in its interface. The compiler holds detailed information about the objects themselves for use at run-time, so decisions that could otherwise be made at compile time are delayed until the program is running. This gives Objective C programs flexibility and power.

Dynamically typed languages have the potential problem of endless run-time errors, which can be uncomfortable and confusing. However, Objective-C allows the programmer to optionally declare the class of an object, and in those cases the compiler applies strong typing. Objective C makes most of its decisions at run-time. Weakly typed pointers are used frequently for things such as collection classes, where the exact type of the objects in a collection may be unknown. For programmers used to strongly typed languages, the use of weak typing can cause problems, so some might give up the flexibility and dynamism. At the same time, the dynamic dispatch of Objective C makes it slower than static languages, though many developers believe the extra flexibility is worth the price, arguing that most desktop applications rarely use more than 10% of a modern CPU. I do not agree with that justification. So what?! It is not a good trend that minimalist approaches aimed at efficiency and performance are being replaced by wasteful programs that largely bet on the power of the hardware, and I personally prefer to work with more static type checking.

Go also tries to respond to this growing trend of dynamically typed languages, and it offers an innovative type system that ends up giving the programmer something like Pythonish duck typing. Go indeed has an unusual type system: it excludes inheritance and does not spend any time on defining the relationships between types. Instead, programmers can define struct types and then create methods for operating on them. Like Objective C, programmers can also define interfaces. Go is strongly typed, but the good thing is that it is not that strong! Programmers do not need to explicitly declare the types of variables; instead, Go implicitly assigns a type to an untyped variable when a value is first assigned to it. There is also dynamic type information under the covers that programs can use to do interesting things.
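A short sketch of that duck typing with interfaces: neither type below declares that it implements the interface; having the right method is enough.

```go
package main

import "fmt"

// Describable is satisfied implicitly by any type with a Describe
// method: no "implements" declaration, no type hierarchy.
type Describable interface {
	Describe() string
}

type Point struct{ X, Y int }

func (p Point) Describe() string {
	return fmt.Sprintf("point at (%d, %d)", p.X, p.Y)
}

type Circle struct{ Radius int }

func (c Circle) Describe() string {
	return fmt.Sprintf("circle of radius %d", c.Radius)
}

func main() {
	// Both types satisfy Describable without ever saying so.
	shapes := []Describable{Point{1, 2}, Circle{3}}
	for _, s := range shapes {
		fmt.Println(s.Describe())
	}
}
```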

3.5. Garbage Collection

It is very important these days to have garbage collection as one of the biggest aids in keeping everything clean and managing memory. Garbage collection was introduced in Objective C 2.0. It certainly was good news for new iPhone and Mac developers who might be very used to Java. Garbage collection simplified matters, but still required programmers to be careful with memory management. The Objective-C 2.0 garbage collector is a conservative collector, meaning that developers not only have full access to the power of the C language, but C's ability to integrate with C++ code and libraries is preserved as well. A programmer can create the bulk of his application in Objective C, letting the garbage collector manage memory, and escape to the power of C and C++ where it's needed.

In Go, as a concurrent and multi-threaded programming language, memory management is very difficult, because objects can move between threads and it becomes hard to guarantee that they will be freed safely once we want to get rid of them. Automatic garbage collection eases concurrent coding. From the perspective of a person like myself, who has been used to high-level, safe, garbage-collected languages for many years, much of this is just boring news; but in the low-level world of systems programming languages, these kinds of changes are revolutionary, especially if the desired performance can be achieved. Go's focus is on speed, and garbage collection carries a performance overhead. Advances in garbage collection technology, however, allowed Go to include it without significant latency.

4. Future And Conclusion

There must be a reason behind the growth in popularity of these two languages. Maybe it is that, as the light of Microsoft declines, Apple and Google are rapidly taking over, each with its own particular ecosystem. Go is a language promoted by Google, giving it an undeniable advantage in terms of popularity, reputation and technical coverage, and Objective C is supported by the might of Steve Jobs' empire.

Objective C enjoys the benefits of the Cocoa libraries that ship with Mac OS. Mac OS X and the iPhone are the largest implementations of the language by a big margin. Recently, there has been a huge iPhone application trend, and the potential to make easy money with easy programming projects is quite high. I believe this very basic human fact will greatly contribute to the future growth of Objective C, because the more developers use a language and test it in different situations, the better and stronger the language can become.

Go is indeed an interesting language. With Google's backing and resources, programmers can rest assured that Go will have some sort of future, even if not too shiny! I think the language has potential, but it will be some time, and not a very short time, before it can attract developers to drop their current platform and choose Go. Go is still a small language; it is experimental and is not recommended for production environments. There is no IDE integration and there are few code examples. Go is incomplete: Google has put out what it has and encourages developers to contribute. As an open source project backed by Google, I think Go will soon develop an IDE and an ecosystem, as it seems to be really well received, as mentioned before, on the TIOBE index. But it's impossible to predict how big the ecosystem will get; if the language is able to generate one, things can go smoothly. I think there will later be a need to support the Windows operating system and to integrate with the Eclipse IDE to further spread it among programmers.

Apple and Objective C stress object oriented programming, and all of the documentation for the language is geared toward object-oriented programming, so in this sense there is a huge difference between Objective C and Go. But, like any other human or machine language, Objective C and Go are comparable by certain criteria, and I have tried to provide a general comparison between the two. It might take a very long time for the paths of these two languages to actually cross. Go is young and full of uncertainties, which makes the comparison rather difficult, or maybe, as my programmer friends say, "impossible". Go needs proper evaluation by unbiased referees over time in order to be more comparable, but I'm sure we will hear more about these two languages in the near future.

Esfandiar Amirrahimi is Web Developer and content manager at PerMont Soft Montreal. He completed his undergraduate program in Computer Science/Artificial Intelligence at Glasgow Caledonian University with First Class Honors. He then moved to Montreal to follow a Master program at Concordia University and he is currently working on Web-based Software Development projects at http://www.permontsoft.com/

Article Source: http://EzineArticles.com/?expert=Esfandiar_Amirrahimi

Tuesday, March 15, 2011

The Best Registry Cleaner For Windows XP

If you're experiencing errors or other problems on your computer, then you may have already heard that the "registry" may be to blame. The registry is a central storage facility for Windows, whereby all the important settings & options for your system are kept. Despite this part of your PC playing a very important role in its day-to-day operations, it's continually causing a large number of problems - which are best fixed by using a "registry cleaner" to repair any of the potential issues you may have on your system.

The best registry tool for Windows XP is the one that's able to fix the large number of errors & problems on your system in the most effective way. Unfortunately, because XP is several years old now, most people are finding that it's extremely difficult to get a registry cleaner application that works as effectively & reliably as possible. All registry cleaners have been designed to perform the same task - which is to scan through the registry database of your computer & repair any of the possible errors that are inside.

Although there are a lot of registry cleaner programs available on the Internet, we've found that there are only a handful which still work on XP. The best tools are the ones which have been created by professional developers, and are therefore 100% compatible with this system. You should also look at how many categories of error the program can identify & fix, as well as any extra features the application may have.

Here are some of the specific features you should look for in the best XP registry repair program:

- Able to fix the most errors on your PC
- Is 100% compatible with Windows XP
- Has additional features, including the likes of a registry defragmenter tool

We've found the best registry cleaner application is a program called Frontline Registry Cleaner, because of the way it's able to fix the largest number of problems on your system. This tool has been designed by a professional software company in the UK, and is 100% compatible with XP, Vista & Windows 7. We have labelled it our recommended program because of the way it's able to fix the most errors in the most effective way - boosting the speed & reliability of your computer as a result.

Monday, March 14, 2011

The History Of XSLT

XSLT, or Extensible Stylesheet Language Transformations, is an XML-based language used to transform XML documents into other XML documents. Instead of the original document being changed, a new document is created based on the content of the original. XML data can be converted into HTML or XHTML documents, which can then be displayed as a web page. XSLT is also used to translate XML messages between various XML schemas, or to make changes to an individual message by editing parts of it.

The World Wide Web Consortium, or W3C as it is better known, was instrumental in developing XSLT. XSLT was originally part of W3C's Extensible Stylesheet Language development effort, which took place between 1998 and 1999. The project also produced XSL Formatting Objects and the XML Path Language, known as XPath. In November 1999, XSLT 1.0 was published by the World Wide Web Consortium. The attempt to create XSLT 1.1 in 2001 was cancelled, and after that the XSL group joined forces with the XQuery working group to create XPath 2.0, which had a richer data model and a type system based on XML Schema. XSLT 2.0 was built between 2002 and 2006.

XSLT was influenced by functional languages and by text pattern matching languages such as SNOBOL and awk. The processing model of XSLT involves one or more XML documents which act as the source, one or more XSLT stylesheet modules, the processor (also known as the XSLT template processing engine), and one or more result documents.

An XSLT editor allows one to view and edit stylesheet code in a tabular format. Most developers prefer using a tool like Advanced Text View for this kind of work. While editing XSLT, Advanced Text View provides syntax coloring, line numbering, source folding and bookmarking, which help in organizing and navigating through the code quickly and make the whole process much easier. An XSLT editor has built-in tools with proper knowledge of XSL, XSLT and XHTML. The entry helper windows are cleverly constructed, and the drop-down menus offer a variety of choices regarding the elements, attributes and entities that can be inserted with a single click. The code completion tool speeds up typing and makes the opening and closing of tags a much more balanced affair.

XSLT has been available in Internet Explorer since 2001. It was available even earlier, but in a form that was not compatible with the specifications drawn up by the W3C. XSLT processors are also available as standalone products or as components of web browsers, servers, and Java or .NET frameworks. Various browsers ship with XSLT support, or with earlier versions that can be upgraded to suit the modern web environment.

As the technology has matured, XSLT performance has improved. Since stylesheets are so often written and generated by hand and by tools alike, XSLT editors are valuable for guiding people through the editing process and supporting the proper development of stylesheets.

Sunday, March 13, 2011

Mastering Web Page Design

If, like me, you have ever taken an interest in having your own website, whether to show off your creative genius, aid a cause, or create an online window for customers to view your products, then you know how difficult it really is. Whatever you create the website for, most people have the idea set in their minds that everything can simply be created with a few clicks, drags, pastes and uploads.

When I wanted to start my own website, I learned this is far from the truth. If you want to create a website that is worth your time, it is going to take more time and effort than you may have originally thought.

But we are far from the days of old, and while it is still a large task to create your very own website, there are various tools available to make the process so much easier.

1.) HTML for Beginners

It is only to your advantage to learn HTML. Though it can be a pain in the neck, knowing basic HTML is paramount, and being able to manipulate code directly is often more helpful than a visual editor. Once you learn HTML, it will be a whole lot easier to construct the layout of your web page. And what better place to begin than that?

2.) Master Designing Web Graphics

If you can construct the layout of your web page, you are still far from finished. If you're anything like me, you don't really like using generic templates, and especially not for your own unique website. MDWG can teach you how to create web graphics that will make your website look more like you intended it to be...unique. Some people like to use Adobe Photoshop for this, but considering the hefty price tag (or the guilty conscience if you pirate it), I'm guessing others don't want to go down that road. This program is more aimed at Web Design than Photoshop is anyway.

Combine these two tools and you cannot go wrong.

3.) Global Domains International

I recommend using this hosting service only because a lot of the ".com" website names that are good are already taken! If all the good names you thought of are already taken as a ".com" domain, then check if you can get a ".ws" one! That way, you do not have to settle for a lesser name that you don't really like as much, or purchase that domain (that is, if it is for sale in the first place). GDI also has very good services like mail forwarding and allowing you to have multiple email addresses and a very nifty SiteBuilder. I encourage you to check them out.

I have reviewed these tools, so to learn more, you can visit the blog Pseudo Review: http://pseudoreview.blogspot.com

With these tools, creating a website takes a lot less time, effort, and money than it used to. So go for it!

Saturday, March 12, 2011

Demand For Diverse iPad Apps And Great Opportunities In iPad Application Development

Apple has made a habit of delivering new gadgets that live up to the hype that propaganda and advertising creates. iPad, in a category of its own, maybe somewhere between an iPhone and a laptop, has caught the public imagination and iPads are selling like hot cakes. Its 9.7 inch touch-sensitive screen, virtual keyboard, sharp graphics, number of easy-to-use features, light weight, and instant internet connectivity make it a perfect device for browsing the internet. Numerous applications made for iPhone are available for use in the iPad and various others are being developed, keeping in mind the special opportunities that iPad's 9.7 inch screen presents.

Just as the super-success of iPhone and other Smartphones fuelled the demand for new applications, the popularity of iPad has created a huge market for iPad apps. Many of the applications created for iPhone can be used in iPad, but a strong need for applications that fully utilize the possibilities offered by iPad's unique features has grown. iPad satisfies the general requirements of sending and receiving email, but loading a few applications can easily make it function as a quality e-reader, a cook book, a phone, a video game, a music player and you can even watch movies on it!

Applications already available for iPhone may fit iPad, but it's not a perfect fit. As the popularity of iPad grows, the demand for applications tailor-made for it increases. As a consequence, iPad application development has evolved as the most rapidly growing part of the software development industry. Companies experienced in developing solutions for iPhone have an upper hand in this field, owing to the similarity between the iPhone and iPad development.

An iPad loaded with the right applications will definitely provide the user with entertainment. Innovative applications like Netflix allow users to watch television shows, favorite movies, sports and a lot more. Gaming corners a huge chunk of the iPad application development market, and games like Angry Birds, Sudoku, helicopter shooters and many other such applications are downloaded on a large scale.

iPad users also rely on various utility apps that help them convert temperatures, calculate their loan payments, keep an eye on stock market trends, and more. Search tools, news apps, travel apps, sports apps, weather apps and productivity apps of different kinds add spice and value to the lives of iPad users.

The success of an iPad application development company depends on its ability to create innovative and original apps that entertain and help the users. Since the launch of iPad, 200,000 new applications have been developed. There is a huge market for creating ingenious and customized apps that add comfort and value to the customers. Along with the demand for more and more apps, there are more and more companies offering to develop iPad apps. In such an environment, the companies that quickly succeed in understanding the market trends and making optimum utilization of the iPad's special features will make their mark and corner a huge chunk of the iPad application development market.

Thursday, March 10, 2011

PHP Programming for Beginners - History of PHP

PHP is a general-purpose scripting language that is well suited to server-side web development. It was created by Rasmus Lerdorf in 1995 and has been developing ever since. PHP originally stood for "Personal Home Page": Lerdorf used a set of Perl scripts he called PHP to maintain his resume and keep track of how much traffic his page was getting. He later rewrote them as Common Gateway Interface binaries in the C programming language, which added the ability to work with web forms and databases and enabled users to start developing dynamic web applications. He released PHP/FI, or "Personal Home Page/Forms Interpreter", version 1.0 on June 8, 1995, to speed up bug hunting and code improvement. That release already had much of the basic functionality PHP has today, with syntax similar to Perl but more limited and simpler.

PHP 2.0

A development team began to form. After months of work and beta testing, they released PHP/FI 2 in November 1997. Shortly after, the first alphas of PHP 3 were released.

PHP 3.0

PHP 3.0, whose syntax closely resembles today's PHP, was created by Andi Gutmans and Zeev Suraski in 1997 after they found PHP 2.0 far too underpowered for eCommerce applications. Andi, Rasmus and Zeev decided to work together and announced PHP 3 as the successor to PHP/FI 2.0, whose development was stopped soon after. PHP 3's great strength was its extensibility: it provided end users with a solid infrastructure for many different databases, protocols and APIs. Another notable feature was the introduction of object-oriented syntax support. PHP 3 was released in June 1998, and by the end of that year approximately 10% of web servers on the internet had it installed.

PHP 4.0

By the winter of 1998, Andi and Zeev had started rewriting PHP's core. Their goals were to improve the performance of complex applications and the modularity of PHP's code base. The new engine, called the "Zend Engine", met those goals and was announced in mid-1999. PHP 4 was based on this engine; additional features were added, and it was officially released in May 2000. PHP 4.0 included features such as support for many more web servers, HTTP sessions, output buffering, more secure ways of handling user input, and several new language constructs.

PHP 5.0

Today, PHP is used by developers all over the world and is installed on around 20% of domains on the internet. The latest release, PHP 5, came out in July 2004 and is driven by the Zend Engine 2.0, with a new object model and tons of new features.

Wednesday, March 9, 2011

Power of Multimedia Product Presentation

The power of multimedia is making head-on progress in today's IT age. Kids, adolescents and even grown-ups get caught up in the flow of power-packed presentations. Recently, two new products with similar features were launched by competing companies. One of the companies invested heavily in advertisements built with gen-next tools and technologies, and the use of various elements of multimedia presentation jazzed up the product. The result was obvious: the other company lost the field, and the company that made use of multimedia technology gained huge popularity.

The importance of multimedia product presentation services cannot be overlooked in today's times. They form a communication style that separates the user from the rest of the competition. A product presentation service plays an important role in the success of a product in the market, and the case above clearly indicates the power of a good multimedia presentation service: it leaves a lasting impression on viewers. A company providing multimedia product presentation services uses highly evolved tools and technologies to produce a professional presentation in any of the formats listed below:

- Microsoft PowerPoint presentation
- Macromedia Flash presentation
- Macromedia Director presentation
- Any bespoke presentation builder

These are some of the prevalent modes of presentation. Imagine the impact of presenting a product list and its accompanying features to clients on a CD, which they can even carry back to their home or office. Gone are the days when paper brochures were in demand. Today, users need information that is instantly available and easy to carry, in addition to being highly impressive and self-explanatory. Multimedia product presentation services give products and services a high-tech edge. The tech-savvy world is looking for newer and better ways to present the same old material, and in keeping with this philosophy, a multimedia presentation helps make that first impression impressive, which ultimately turns into a lasting impression.

Multimedia presentations can be delivered in a variety of formats available in the market today: the finished presentation can be shipped on CD-ROM or uploaded to the internet. There are numerous benefits to developing a product presentation with the latest multimedia tools and technologies; some of the most important are listed below:

- The products make an instant impression on tech-savvy customers, and that customer interest ultimately makes them acceptable to the masses.
- A multimedia presentation makes it easy to showcase products and their features in a desired format.
- Customers get accurate and worthwhile information about the products in digital format.
- It shortens the product sales cycle and saves marketing personnel from unnecessary questions.
- It supports the drive to 'Save the Earth' by reducing the paper and materials needed to print brochures.

SCMS offers a full range of marketing strategies using multimedia product presentation services. This helps our clients explore new horizons and better market segments for their products. We offer a variety of service packages, ensuring the productivity and longevity of our clients' products by presenting them in a lucid and precise manner, and we develop product presentation slides with real style and distinction.

Article Source: http://EzineArticles.com/?expert=Shania_Ellis

Tuesday, March 8, 2011

Remote Database Administration Service Can Solve Your Needs

Globalization has taken hold, and the outsourcing of IT operations has become omnipresent. The term outsourcing describes a process through which a project is relocated outside the boundary of the corporate culture, generally to a third-party vendor. In today's economy, companies are keen to cut costs, rationalize operations and gain competitive advantages. Many organizations concentrate their management, human resources, capital and other resources on core activities, and emphasize outsourcing to get the maximum benefit from their business operations.

Databases, which provide suitable storage for vast arrays of information, help you store, search, view and manipulate information in line with the business vision, mission and goals. The basic job of a database administrator consists of consistent monitoring, backup, patching and troubleshooting. The heavy demand for uptime and the dangers of downtime have pushed organizations toward database outsourcing, and the remote database administration market has grown accordingly.

Database administrators are dealing with these factors under considerable strain, and organizations want to use their DBAs more strategically. The global economy and business are changing rapidly, and the trend is moving towards remote database administration to protect against DBA burnout and staff turnover, while also reducing system weaknesses and increasing productivity. Considering Oracle database services in particular, Oracle is one of the most sophisticated relational database management systems (RDBMS). Oracle is platform independent, versatile, secure, fast and truly dependable for OLTP (Online Transaction Processing) workloads; it has become a real enterprise solution.

However, an Oracle database can become corrupted and all operations can stop, for reasons including storage media failure or system corruption. Generally, users have various corrective methods to overcome this debacle and recover the Oracle database; if these do not work, they turn to third-party Oracle database recovery software to fix it.

Remote database administrators serve remotely. An RDBA solves problematic database issues involving structural scale, programming and security. A monthly service typically includes remote database support twenty-four hours a day, seven days a week, at a fraction of the cost of a full-time in-house resource, along with best-practice remote management services and IT practices. The merits of an RDBA (Remote Database Administrator) include expert database implementation, reduced capital and ongoing spending, and an immediate and tangible return on investment.

A remote DBA program can be cost effective, as a remote database administrator mitigates risk and provides on-shore support. You can also have dedicated senior DBA resources, immediate contact, third-party monitoring tools and analytics, case tracking and daily proactive system audits. Organizations usually move towards remote database administration to gain the support of a remote DBA, who can also monitor MS SQL Server, MySQL, Sybase, DB2, EnterpriseDB and other platforms. It helps them emphasize core competencies, uphold institutional integrity and decrease enterprise computing costs.

Monday, March 7, 2011

New Mathematical Algorithm Needed to Defeat Incoming Swarm Ordnance

It would seem to me that the future of warfare will involve robotic swarms of UAVs or missiles attacking our Navy ships and aircraft carriers. You can also expect that when the US Army goes to defend a country or region, it will be the target of a barrage of rockets, or an incoming swarm of robotic munitions of death. Therefore, we need a better system, and a way to target each one of these incoming objects. Right now, the future looks as if it will be one with a laser defense system.

That makes sense, because if you are fighting a swarm, you can use up all your ammunition trying to shoot it down, and then it's just a matter of attrition of munitions. The force with the greater number of projectiles for offense or defense will eventually win, as the other side runs out of munitions designed to defeat incoming rockets.

Therefore, I propose DARPA send out a bid for some super mathematicians to come up with a mathematical algorithm to defeat incoming swarm ordnance. Here is how I envision it working. The radar system attached to the automatic guns or laser system will figure out how many projectiles are coming in within a 3-D grid area of the sky, and target the first and closest rocket in order to explode it, so that the shrapnel and debris take out the other rockets coming in behind it.

The system will then hit the most probable targets, those with the greatest probability of taking out additional targets within that area. The laser system will then find another target of opportunity and keep doing the same in rapid succession. Each time, one kill turns into 10 or 20, and this is important because the laser doesn't have enough time to dwell on each target when a massive swarm is approaching.

Then the laser will back up to a closer column or row of the 3-D grid and do the same. Eventually it will have taken out enough incoming projectiles to reduce the swarm to a manageable size, and then, as the remainder gets closer, the military's defensive rapid-firing gun systems (Aegis type), or a large number of such weapon systems, can complete the job of protecting our team and assets. Now then, I am quite certain I'm not the only person to have considered this, but it would seem that right now, more than ever, we need to be looking into these types of defense systems.
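The greedy targeting idea sketched above (pick the shot that destroys the most neighbours, remove the debris-cloud casualties, repeat) can be expressed as a small simulation. This is only an illustrative sketch of the proposed algorithm, not any real fire-control system; projectiles are plain (x, y, z) tuples and the blast radius is a single number:

```python
import math

def pick_target(projectiles, blast_radius):
    """Greedy step: choose the projectile whose destruction is expected to
    take out the most neighbours, breaking ties in favour of the closest
    (to the defended asset, assumed at the origin) threat."""
    def secondary_kills(p):
        return sum(1 for q in projectiles
                   if q is not p and math.dist(p, q) <= blast_radius)
    return max(projectiles,
               key=lambda p: (secondary_kills(p), -math.dist(p, (0, 0, 0))))

def engage_swarm(projectiles, blast_radius):
    """Repeatedly fire at the best target, removing it and every neighbour
    caught in its debris cloud, until the swarm is gone. Returns shots fired."""
    remaining = list(projectiles)
    shots = 0
    while remaining:
        target = pick_target(remaining, blast_radius)
        remaining = [q for q in remaining
                     if q is not target and math.dist(target, q) > blast_radius]
        shots += 1
    return shots
```

On a tight cluster of three rockets plus one straggler, this defence needs only two shots instead of four, which is exactly the "one kill turns into many" effect the article describes.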

It might also work well for an offense system when sending in fighter aircraft, where the enemy is firing massive numbers of surface-to-air missiles with proximity fuses. If you can hit them prior to their release from afar, or from an aerial laser system, you can greatly improve the survivability of your attacking force, once it gets within range to be spotted. Please consider all this.

Lance Winslow is the Founder of the Online Think Tank, a diverse group of achievers, experts, innovators, entrepreneurs, thinkers, futurists, academics, dreamers, leaders, and general all around brilliant minds. Lance Winslow hopes you've enjoyed today's discussion and topic. http://www.WorldThinkTank.net - Have an important subject to discuss, contact Lance Winslow.

Article Source: http://EzineArticles.com/?expert=Lance_Winslow

Article Submitted On: February 04, 2011

Sunday, March 6, 2011

Code Review Done The Right Way

Code reviews are one of those practices that we do all the time, but how do we make sure they are really useful? What do you want to achieve by conducting a code review? Check that you completed the task correctly? Show how good you are? Discover bugs? See if anyone else agrees with your coding standards? For me, the most important parts of a code review are to:
- validate the design
- discover bugs
- share knowledge

So how do you conduct the review to achieve these three things? There are at least two major ways to do a review: online and offline. Online, you let the author guide you through the code. Offline, the author has to provide you with the code, and you have to walk through it yourself in order to provide any feedback. My experience is that online reviews only work for tiny changes, a few lines of code at most. You always end up having the author guide you through the code, which in turn makes you miss the same bugs and design flaws that the author did, and knowledge sharing is usually limited since you will miss most of the implementation details. So if someone asks you to do a review and it covers more than a few lines of code, make sure to do it offline. Otherwise you end up spending a whole lot of time achieving nothing in terms of validating the design, finding bugs or sharing knowledge. In such a case you are better off spending your time on something else.

Knowledge sharing might be the part of a code review that is forgotten most often. Usually a limited set of people on the team develops a new feature, but at some point you need to make sure that other people on the same team, or on another team, can manage to help out on that feature; the author might get sick or be unavailable for other reasons. To remove this risk you need to include several people in the review, at least two. That way more developers will know how the feature is implemented, and chances are those people will be able to quickly start working on that feature if needed.

Reviews are also a great opportunity to train and educate new members of a team. If you have any junior or new people on the team, make sure to include them regularly in the review process: not just by reviewing their code, but by letting them review too. It will be a great learning experience for them, and most probably they will provide some really good feedback as well. This lets them learn a lot about the development practices at your company, and also about new features which they will probably work on later.

So, to summarize: put aside time to do the review, do it offline, and have more than one person doing the review.

Saturday, March 5, 2011

Make Your Own Games: How Can Bitmaps and Vector Graphics Make Your Game Look Professional?

If you want to learn how to make your own games, there are several things that you need to know first. One of the most important areas in Game Design and Development is the area of Graphic Design.

The Graphic Design in a game can make all the difference between a poor looking game and a professional looking game. In Game Design, there is one distinction that is particularly useful, and that is the distinction between Bitmaps and Vector Graphics.

Vector Graphics are graphics created from mathematical equations. They store images in the form of lines, colors and vectors (points), which makes it very easy to scale Vector Graphics without losing quality in your image's information. The vector image actually needs to be rendered before it can be seen, as it is stored as mathematical information to be interpreted visually.

Bitmaps (raster images), on the other hand, are built pixel by pixel: each dot (pixel) in the image carries color information. This is how digital photography is stored, and it has the main disadvantage of not allowing you to scale the image without the computer having to invent information, which very easily leads to a kind of visual noise known as pixelization.

Pixelization means that the pixels (dots) in the image become so big that they are individually visible. This was the case on old computers (8-bit graphics, for example), and it is used today as a graphic design technique to give images a bit of a technostalgic look.
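A tiny sketch makes the difference concrete. Scaling a bitmap up with the common nearest-neighbour method simply repeats each pixel, which is exactly the blocky effect described above; a vector image, by contrast, would be re-rendered from its equations at the new size with no loss. The 2x2 "image" of numeric colour values below is, of course, invented for the example:

```python
def upscale_nearest(bitmap, factor):
    """Nearest-neighbour upscaling: each original pixel becomes a
    factor x factor block, which is exactly what produces the blocky
    'pixelized' look -- the computer has no extra detail to invent."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in bitmap
            for _ in range(factor)]

# A 2x2 "bitmap" of colour values...
tiny = [[1, 2],
        [3, 4]]

# ...scaled 2x: every original pixel is now a visible 2x2 block.
big = upscale_nearest(tiny, 2)
```

Real tools (The Gimp, Photoshop) offer smarter interpolation, but smarter interpolation is still invention; only a vector source avoids the problem entirely.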

Now, how can bitmaps and vector graphics make your game look professional?

First of all, know that vector graphics are created in a different manner. They are more like illustrations or drawings that can be easily modified (and for example, this is used as a 2D animation technique). The thing is, you can easily use vector graphics to make animations and even games (Flash is a good example of this) that are "illustration based".

On the other hand, bitmap manipulation will help you in the design of interfaces and giving that feeling that you want to give to your buttons, menus, displays... The thing is, if you do not know what vector images and bitmaps (raster images) are, and how to use them, this will be a great start to get you rolling on your graphic design.

You can either try Adobe's Photoshop and Illustrator solutions or go for the free alternatives (great and very powerful, I recommend them): The Gimp and Inkscape.

You will discover that most of the time you can combine the power of Vector and Raster Graphics to make your designs much more powerful. From the sprites (if you are working on a 2D game) to textures (if you are working on 3D), to the interface and the motion graphics for your video games animations, there are many ways in which Bitmaps and Vector Graphics will make your designs much more attractive and professional, thus making your video games look much more professional (and sell better).

Thursday, March 3, 2011

How To Find People Using An Email Address

Learning how to find people using an email address is not really that hard, but you might not get all the right information the first time you try. There are tons of free services out there that work as people finders. They use public records to search for the criteria you enter, then pull up whatever they find. This works wonderfully if the person stays on top of filing change-of-address notifications or lists their phone numbers in the phone book. Unfortunately, many people now use cell phones with unlisted numbers, and it usually takes a while for a change of address to go through even when it is submitted on time.

I am not saying that all the free services will give you faulty information, and as you can see it might not even be their fault. The fact remains that you might get the information you want on how to find people using an email address with a free service, and you might not. Now, there are paid services that offer the same things, but they charge a price because they pay fees to get access to different databases for the information you want. One of them is probably a database for cell phone numbers, and since most people have one it would be a great one to access. Since people work more online now, they are more likely to keep their online records up-to-date before they worry about the others.

The methods for both kinds of service are usually the same. You go to the site and choose the method of search you want (they usually offer more than one). All you have to do after that is enter the information you have and hit search. The service then runs through all the databases it has access to and gathers all the data that matches what you are looking for. You should then see another screen pop up with all of the results. Hopefully it finds the person you were looking for; as I said, that might not always happen with a free service, but it is usually worth a try.

These types of services are about the only way I know of to find people with an email address. I have never seen a search tool on any email server that lets you look people up, even if they have that server's address, and I cannot think of any other means, since there is no phone book listing for people's email addresses. It is up to you, but it is relatively easy to accomplish, and fast, depending on your connection. Even dial-up should not be too terribly slow; it might take a few more minutes to work, but it should not take hours. Remember, you can always try the free services first if you just want to get a better idea of how it might work.

Wednesday, March 2, 2011

Demystifying File Transfer

File transfer is the act of communicating files from one computer to another, often called uploading or downloading. Most of us do this every day over the internet or by email through the use of attachments, and this behaviour has become so common that we often have very little knowledge of what is actually involved. Essentially, the process relies on particular digital codes called 'protocols', which consist of digital messages and may include such processes as authentication, signalling and error detection, as well as dictating the syntax and semantics of the communication. The standard protocol in use is literally called 'FTP', the File Transfer Protocol, and it is used over the internet within the 'TCP/IP' suite, TCP meaning 'Transmission Control Protocol' and IP 'Internet Protocol'. As you can gather, this suite contains a whole set of communications protocols, of which FTP is the most well known for file transfer. The TCP/IP suite is essentially a set of layers, with each layer responsible for solving a particular set of problems.

Primarily, file transfer is the responsibility of file servers, and there are two basic types of transfer. The first is called 'pull-based', because the information is requested by the client; the second is 'push-based', in which a server automatically dispatches information to the client, very often under a pre-arranged subscription, for example a membership of a magazine website. File transfer is executed using a variety of methods: 'transparent' file transfers occur without the client necessarily being aware of them, whereas 'explicit' file transfers operate in response to client demand, often following authentication procedures such as a 'signing-in' operation. File transfers can also operate on a peer-to-peer basis on a computer network, where the same information is distributed between a number of workstations, and transfers regularly occur between workstations and peripheral devices such as printers, scanners and webcams.

As I mentioned a moment ago, FTP - File Transfer Protocol is the standard protocol used within the TCP/IP suite, but that doesn't mean to say necessarily that it's popular, in fact some people out there think that FTP can be a bit of a hassle. According to a blogger on the Send This File website who chooses just to call himself 'Alex', the main issue with FTP is that of security. "FTP has many security issues" he says "including bounce attacks, spoof attacks, brute force attacks, packet sniffing, username protection and port stealing. Perhaps the biggest risk with FTP is that the FTP server can only handle usernames and passwords in unencrypted plain text".
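Alex's last point is easy to demonstrate. In standard FTP the client authenticates by sending literal USER and PASS lines over the control connection, and the hypothetical helper below just builds those bytes; they are exactly what an eavesdropper sniffing the wire would see, password included:

```python
def ftp_login_commands(username, password):
    """Build the two FTP control-channel commands a client sends to log in.
    The channel is plain text: the password travels unencrypted, readable
    by anyone able to capture the traffic."""
    return [f"USER {username}\r\n".encode("ascii"),
            f"PASS {password}\r\n".encode("ascii")]

# The invented credentials here are purely illustrative.
for line in ftp_login_commands("alice", "s3cret"):
    print(line)
```

Secure variants such as FTPS or SFTP exist precisely to wrap or replace this plain-text exchange.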

Of course, 'Alex' may very well be merely trying to sell you a new kind of transfer protocol or procedure, particularly in view of the fact that the website on which his blog appears is devoted to a particular internet-based transfer service which claims not to require any downloading of software, any email attachments and which has no apparent limit on the size of files to be transferred. Actually, in reading this I'm suddenly quite interested and I suddenly find myself bookmarking this website for future reference. In the meantime however, you may be confused somewhat by 'Alex's' reference to all sorts of attacks and so forth. What is he going on about?

This can all get very technical and complicated, but I've managed to make some sense of it, I think. Let's start with the 'bounce attack'. This is a method of accessing a computer by exploiting a glitch in the FTP protocol using the 'PORT' command: an indirect route in which the attacker scans a server for open ports (a 'port' is the end destination of a particular communication, often identified by a port number together with the IP address of the destination computer). However, these days modern FTP servers are configured by default to refuse PORT commands that do not connect back to the originating machine, thus thwarting bounce attacks.

A spoof attack is one conducted using a disguise: the attacker disguises a program or message so that it looks like something the victim would trust. Some of these attacks are known as 'man-in-the-middle' attacks, whereby the attacker intercepts communications between two parties and convinces each that they are still talking to the other. A better-known form of spoofing is 'phishing', whereby an organisation's web page (for example a bank's) is reproduced by the attacker. When people who hold accounts with that organisation visit the fake page and log in, it steals their passwords and account information.

A brute force attack is one directed against encrypted data. The attacker systematically tries every possible cryptographic key until the right one is found, at which point the attacker gains access.

Packet sniffing involves the use of a network analyzer which intercepts traffic passing over a network. It captures each 'packet' (a unit of data), then decodes and analyzes its contents.
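The "decodes its contents" step is mechanical: captured bytes are sliced up according to the protocol's fixed header layout. As a sketch, here is how a sniffer would decode the 20-byte IPv4 header to recover the source and destination addresses; the sample packet is hand-built for illustration rather than read off the wire:

```python
import struct

# Decode the fixed 20-byte IPv4 header, as a packet sniffer would
# after capturing raw bytes from the network.
def decode_ipv4_header(packet):
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "protocol": proto,                       # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: TTL 64, protocol 6 (TCP),
# from 192.168.1.5 to 10.0.0.9.
sample = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([192, 168, 1, 5]), bytes([10, 0, 0, 9]))
print(decode_ipv4_header(sample))
```

With FTP, the payload that follows this header includes those plain-text USER and PASS lines, which is why sniffing is such a direct threat to it.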

In order to understand 'port stealing' it helps to know that network interfaces are identified by a MAC address (MAC = Media Access Control). A 'switch', the device that copies frames very quickly from one port to another so that multiple communications can occur at the same time, keeps a CAM table (CAM = Content Addressable Memory) recording which MAC address is reachable through which of its ports (the end destinations of communications referred to earlier). A 'port stealing' attack occurs when an attacker sends frames forged with a victim's MAC address as their source, corrupting the switch's table so that the victim's traffic is redirected to the attacker.
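A toy model of the CAM table shows why this works. A switch learns its table from the source MAC of each frame it sees, trusting that field completely, so a forged frame simply overwrites the victim's entry (the MAC addresses and port numbers below are invented for illustration):

```python
# Toy model of a switch's CAM table: MAC address -> port number.
cam_table = {}

def switch_learns(src_mac, ingress_port):
    """The switch records which port a MAC address was last seen on."""
    cam_table[src_mac] = ingress_port

def forward(dst_mac):
    """Frames for dst_mac go to whatever port the table says."""
    return cam_table.get(dst_mac, "flood-all-ports")

# Normal operation: the victim's machine is on port 1.
switch_learns("aa:aa:aa:aa:aa:aa", 1)
print(forward("aa:aa:aa:aa:aa:aa"))   # 1

# Attack: from port 7, the attacker sends a frame forged with the
# victim's MAC as its source, "stealing" the table entry.
switch_learns("aa:aa:aa:aa:aa:aa", 7)
print(forward("aa:aa:aa:aa:aa:aa"))   # 7 -- the victim's traffic now reaches the attacker
```

Defences such as port security on managed switches work by pinning MAC addresses to ports so that this silent overwrite is refused.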

So, it seems there are a number of problems with FTP, and Alex's Send This File website isn't the only company offering an alternative; there are others out there as well, such as yousendit.com and transferbigfiles.com. Essentially, companies such as these are the online equivalent of couriers who transport packages between busy city offices. Yousendit also issues clients receipts which they can click on to download the file. However, such 'courier' services aren't the only options. Other companies, such as Sysax, offer software that transfers files using FTP but adds automation to ease the operation of complex tasks and debugging.

FTP isn't the only method of file transfer, although it is the oldest. There is also P2P (peer-to-peer) software, which is specifically intended to transfer large files efficiently. One example of P2P software is BitTorrent, but there are others such as BearShare and Ares. Very often they are used to transfer and share music files, videos and other such material. There are also the popular instant messenger applications such as MSN and AIM, while MS Windows has a file-sharing facility which allows you to share some of your files with other computer users by designating certain files on your hard drive as suitable for sharing on the network.

The simplest method of transferring files, however, is the standard email attachment, which everybody uses. The main problem with this method is that it can often only carry small amounts of data, although compressing files before emailing them may help to counter this. There are also certain services available to assist with emailing large files, such as mailbigfile.com, a free email sending service, along with the other courier-style companies I mentioned previously.
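The compression trick is a one-liner in most languages. As a sketch, here is how you might gzip a text-heavy file before attaching it; repetitive text shrinks dramatically, though already-compressed formats such as JPEG or MP3 will not:

```python
import gzip

# Compress a file's bytes before attaching them to an email,
# to squeeze under an attachment size limit.
def compress_for_email(data: bytes) -> bytes:
    return gzip.compress(data)

report = b"quarterly figures\n" * 1000      # a repetitive text file
attachment = compress_for_email(report)
print(len(report), "->", len(attachment), "bytes")
```

The recipient simply decompresses the attachment (`gzip.decompress`, or any standard unzip tool) to recover the original bytes exactly.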

At the end of the day, transferring information across the net largely depends on the speed and power of your computer or network and the risks that may be involved. Fortunately there are a wide variety of methods for transferring data, not just FTP and email attachments. It is therefore well worth logging on and seeing what is available on the net if you need assistance.

'A Free Alternative to FTP' by 'Alex', Send This File: The Trusted File Transfer Service
Wikipedia
Sysax

Robin Whitlock
I am a freelance writer, researcher and administrator with an interest in many contemporary issues across a wide variety of genres and business sectors. I have a particular interest in energy and the environment, which is the main theme of my blog. I have been published in a wide variety of magazines since I started writing in 1997, and I also write regularly for the social media forum of a technical recruitment consultancy based in Milton Keynes. More recently I have started writing articles for the website of a business software company and also work as an online data input administrator for a London-based research company involved in gathering investment information for the food and renewable technology industries. I am a graduate of Bath Spa University with a BA (Hons) in Psychology and English (2:1).

Article Source: http://EzineArticles.com/?expert=Robin_Whitlock