Intel's Moorestown platform to get 3.5G support

HSPA support in Moorestown hints that Intel recognizes users will want an alternative to WiMax for connecting wirelessly outside of Wi-Fi hotspots

Intel's upcoming Moorestown chip platform will include optional support for high-speed cellular data services when it hits the market in 2009 or 2010, Intel said Monday.

Moorestown will be based on Lincroft, a system-on-chip that includes an Atom processor core and a memory controller hub, and a chip set called Langwell. Designed for small, handheld computers that Intel calls MIDs (mobile Internet devices), Moorestown will offer optional support for both WiMax and HSPA (High Speed Packet Access) cellular networks.

Intel is heavily pushing WiMax, which it sees as the best option for future wireless broadband services. But WiMax availability is very limited and it will take time for networks to enter commercial operation and expand their coverage areas. The addition of HSPA support to Moorestown hints that Intel recognizes that WiMax may not be extensively deployed as quickly as it would like, and users will want an alternative way of connecting wirelessly outside of Wi-Fi hotspots.

This isn't the first time Intel has flirted with offering 3G support to computers. In 2007, the company shelved an agreement with Nokia to provide 3G modules for Centrino laptops, saying customer interest in the technology was lukewarm.

That appears to be changing. At the Intel Developer Forum in San Francisco in August, Belgium's Option showed off HSPA modules it developed for MIDs based on Intel's Atom. On Monday, Intel announced that Option and telecom equipment maker Ericsson will make low-power HSPA modules that will be offered as an option with Moorestown.

Intel is making its own WiMax module for Moorestown. The module, code-named Evans Peak, made an appearance at the Ceatec show in Japan in late September.

Ericsson achieves 100Mbps rates in LTE trials

Ericsson expects that the first commercial network of LTE next-generation mobile technology will go live in the fourth quarter of 2009

Ericsson has managed to achieve rates in excess of 100Mbps with next-generation mobile technology LTE (Long Term Evolution) during recent field trials.

LTE is pitched as a successor to the 3G (third generation) mobile services such as the European UMTS (Universal Mobile Telecommunications System) and similar wide-band CDMA (W-CDMA) services.

Ericsson's goal in the field trials was to show that LTE works all the way from the base station to the terminal. "It's always easy to say that you can get a certain speed in a lab environment, but here we have used real antennas and real distances to the terminals, and also in a moving vehicle," said Lars Tilly, head of research at Ericsson Mobile Platforms.

Using four transmit streams (the maximum number supported in the LTE standard), four receive antennas, and a bandwidth of 10MHz, the measured peak rates exceeded 130Mbps. This translates into approximately 260Mbps at the maximum bandwidth of 20MHz, according to an article in Ericsson Review.

"Not everyone will be able to get 100Mbps. You need pretty good conditions for it to work, and you need to be relatively close to the base stations, a couple of hundred meters," said Tilly.

The company also evaluated application-level performance using two transmit and two receive antennas. Along a test route that stayed mostly within 1 kilometer of the test site, the TCP (Transmission Control Protocol) bit rate exceeded 40Mbps at least 50 percent of the time and 100Mbps at least 10 percent of the time.

The test also shows how important MIMO (Multiple-Input Multiple-Output) is to getting the most out of LTE. Using four transmit and receive antennas increases performance by a factor of three compared to a basic setup. At the same time, Ericsson warns that MIMO-related gains are strongly dependent on radio conditions.

All the major telecommunications equipment vendors are currently working at full speed to get LTE out the door, according to Martin Gutberlet, analyst at Gartner.

He isn't worried about the base stations. Instead, it's the lack of access to the necessary spectrum, which still hasn't been handed out in many European countries, including the U.K., France, and Germany, that could lead to delays, according to Gutberlet.

Ericsson expects that the first commercial LTE network will go live in the fourth quarter of 2009, according to a spokeswoman.

Google set to release Android source code

By making the source code for its mobile OS open, Google expects that a wide variety of applications will appear as well as cheaper and faster phones

Google planned to announce on Tuesday that the source code for its mobile operating system, Android, is now available for anyone to use free. The move was expected, although the timing was uncertain.

Developers can find the source code on the Web site for the Android Open Source Project.

"An open-sourced mobile platform, that's constantly being improved upon by the community and is available for everyone to use, speeds innovation, is an engine of economic opportunity and provides a better mobile experience for users," said Andy Rubin, senior director of mobile platforms for Google, in a statement.

The first Android phone isn't yet on the market -- the G1 goes on sale in the United States from T-Mobile on Wednesday. Journalists were first able to publish reviews of the G1 last week.

Google expects that by making the source code for the operating system open, a wide variety of applications will appear, as will cheaper and faster phones.

But Google's model for Android has some critics. The LiMo Foundation, which publishes specifications for middleware for mobile Linux devices, and of which Google is not a member, says that Google's model might be too open.

"There's a debate about whether Google's approach to openness is sustainable and good for the industry," said Andrew Shikiar, director of global marketing for the LiMo Foundation.

Android will be released under the Apache license, which doesn't require developers to share their changes to the code back with the community, he said. This is one of the reasons why some people wonder whether Android will become fragmented as various incompatible versions of the software appear in phones across the market.

In the FAQ section of the site for the Open Handset Alliance, the group supporting Android, Google says that using the Apache license will let manufacturers innovate on the platform and allow them to keep those innovations proprietary as a way to differentiate their offerings.

Shikiar floats a more sinister reason that he's heard for why Google may have chosen the Apache license. "If it's fragmented and scattered, and the only common version is the Google-optimized one, it's good for them," he said. That's because the G1, which is optimized by Google, comes loaded with many Google services that can eventually bring in revenue for the search giant. If that turns out to be the best version of an Android phone, more people will use it and so, presumably, more people will be using Google apps.

LiMo and Symbian, which also is going open source, each use different licenses, but both include obligations for people who change their code to share their changes, Shikiar said.

Shikiar also criticized Google because he said the search giant hasn't created any sort of governance model for the Open Handset Alliance and doesn't publicly publish the group's membership agreement. A governance model spells out for participating companies exactly how their intellectual property can be used by other members. Without it, members might be reluctant to contribute, he said.

The OHA did not reply to questions recently posed regarding its choice of license and its governance model. Google also was not immediately able to respond to similar questions.

Top 10: Microsoft's bug, Greenspan speaks, Android launches

This week's roundup of the top tech news stories includes Microsoft's critical bug, Android's launch, the tech economy, and more

Soon after Microsoft released a patch for a critical bug in its Windows Server software, attack code surfaced, and by Friday afternoon an early sample of that code was public, ending the week on a warning note. Earlier in the week, former Fed chairman Alan Greenspan blamed the U.S. economic crisis at least in part on the use of bad data. Perhaps next week will bring better news.


1. Attack code for critical Microsoft bug surfaces and New worm feeds on latest Microsoft bug: It didn't take long after Microsoft provided information about a critical Windows flaw, along with a patch, before attack code showed up. Developers of the Immunity security testing tool had an exploit written within a couple of hours of Microsoft's announcement on Thursday. Although Immunity's software is only for paying customers, security researchers said they expected a version of the code to go public soon. That happened Friday afternoon, when sample code appeared on the Web. The flaw, in the Windows Server service, which is used to connect network resources, was also being exploited by a worm.

2. Greenspan, Cox tell Congress that bad data hurt Wall Street's computer models: Insufficient and faulty data used in risk management models contributed to the financial mess embroiling the U.S. and rippling across the globe, said former U.S. Federal Reserve chairman Alan Greenspan. Financial firms made business decisions using "the best insights of mathematicians and finance experts, supported by major advances in computer and communications technology," Greenspan told the House Committee on Oversight and Government Reform. "The whole intellectual edifice, however, collapsed in the summer of last year because the data inputted into the risk management models generally covered only the past two decades -- a period of euphoria."

3. Microsoft expanding Surface access: In order to get the SDK for Microsoft's touch-based apps platform, developers had to buy Surface hardware, which could be a pricey proposition. Well, no more: Microsoft will give the SDK to developers who attend a Surface workshop at its Professional Developers Conference next week.

4. Android phone launch day relatively quiet: Google's Android phone went on sale Tuesday, with people here and there standing in short lines outside of stores to be first to get their handsets. While there wasn't anything approaching the buzz surrounding the first iPhone sales, T-Mobile stores reported a steady stream of customers for its G1 phone, which is the first on the market to run the Android operating system.

5. Intel repudiates executives' criticism of the iPhone: Comments from Intel executives who criticized the iPhone weren't appropriate, Intel said, after reports on the statements emerged from the company's developer forum in Taipei. Shane Wall and Pankaj Kedia said the iPhone is slow and incapable of running the "full Internet" because the smartphone has an Arm processor instead of, you guessed it, an Intel processor. "Apple's iPhone offering is an extremely innovative product that enables new and exciting market opportunities. The statements made in Taiwan were inappropriate, and Intel representatives should not have been commenting on specific customer designs," the company said later in a statement posted on its Chip Shots Web site.

6. Gmail activation problem in Apps finally solved: Google this week finally solved a Google Apps problem that kept recent subscribers to its Web-hosted office suite from reaching their new Gmail accounts. The problem, which began late last week, kept Gmail accounts from being activated for new Apps users. The company said Monday the problem would be fixed by Tuesday, but it didn't work out that way, to the consternation of many Apps users, or would-be users.

7. Sun tussles with startup over noted systems designer: In an oddball of a story, startup Arista Networks set off a mini firestorm with Sun Microsystems when it announced that Andreas Bechtolsheim is the company's new chief development officer. Bechtolsheim, you see, is Sun's chief scientist and a top-notch systems designer, so Arista's news led to reports that he had resigned from Sun, which Sun denied, sending e-mail to journalists saying those reports were inaccurate and that he would continue at the company, though part time. That led Arista's director of marketing, Mark Foss, to say that as far as the startup is concerned Bechtolsheim is working full time at Arista, and that there was "a miscommunication" between his company and Sun that they were working to clarify. Bechtolsheim then did the clarifying -- he works full time now at Arista, which he cofounded and where he also serves as chairman, but he's going to advise Sun on a part-time basis of "no more than one day a week."

8. Intel shows off new laptop platform: Users got a glimpse of Intel's upcoming laptop platform, code-named Calpella, at the Intel Developer's Forum in Taiwan. The primary focuses of Calpella are efficiency and battery life.

9. Microsoft looks to secure Web content: At its Professional Developers Conference next week, Microsoft will show off its Web Sandbox initiative, which seeks to secure Web content by isolating it. The technology includes a cross-browser JavaScript virtualization layer that provides a secure standards-based programming model without requiring any add-ons.

10. Where the presidential candidates stand on tech issues: Both Democrat Barack Obama and Republican John McCain bring technology experience to the table as presidential candidates, though the experiences are quite different. Obama is an avid user of technology -- he's among the capital's BlackBerry enthusiasts -- while McCain admits he's not much for using electronic devices, but he has been on the Senate Commerce, Science and Transportation Committee for a long time, and a lot of technology-related legislation passes through that group before going to the full Senate. IDG News Service took a look at where they each stand on five key technology areas: telecommunications, national security, privacy, IT jobs, and innovation.

Office 2007 Service Pack 2 due in spring '09
SP2 of Office 2007 will introduce support for ODF and PDF as well as a more reliable calendar and faster performance for Outlook and other improvements

Microsoft said via a company blog Wednesday that Service Pack 2 (SP2) of Office 2007 will ship between February and April of next year.

The software maker had already said that SP2 will introduce support for the Open Document Format (ODF) used by Office's chief competitor, OpenOffice.org, the Portable Document Format (PDF) created by Adobe Systems, and its own XML Paper Specification (XPS) that is meant to compete with PDF.

The Office Sustained Engineering blog confirmed those features, and some others:

A more reliable calendar and faster performance for Outlook 2007;
Improvements to Excel 2007's charting;
Enabling Object Model support for charts in PowerPoint 2007 and Word 2007;
An uninstallation tool for Office 2007 service packs;
Improvements to server editions of Office 2007.

This is in contrast to SP1 of Office 2007, released last December, which mostly provided bug fixes rather than new features.

Office 2007 was released to businesses in November 2006, the same time as Windows Vista, with shipments to consumers and small businesses in January the following year.

Office 2007 was far different from prior versions, using a new "Ribbon" interface. Despite the risk of customer rejection, Office 2007 has been widely considered a sales and marketing success, unlike Vista.

Microsoft said it plans to divulge more details in Office blogs in the next few weeks. It will also begin inviting Office enterprise customers to a private SP2 beta in the next few days, which may or may not turn into a public one.

Google patches Chrome 'carpet bomb' bug

The months-old bug can be used to trick people into downloading and launching malicious code

Google has patched its Chrome browser to block a months-old bug that can be used to trick people into downloading and launching malicious code.

The fix has not been pushed out to most users, however.

The security researcher who reported the vulnerability, which involves a combination of the "carpet bomb" bug with another flaw disclosed in August, called the fix "enough for the time being," but said Google's patch wasn't the final word.

Google plugged the hole in a developer-only version of Chrome that has not yet been sent to all users via the browser's update mechanism. Chrome users, however, can reset the browser to receive all updates, including the developer editions, with the Channel Chooser plug-in.

According to a Google blog, Chrome 0.3.154.3, which was released last week, changes the browser's download behavior for executable files, such as .exe, .dll, and .bat files on Windows.

"These files are now downloaded to 'unconfirmed_*.download' files," said Mark Larson, Chrome program manager, in the blog post. "In the browser, you're asked if you want to accept the download. Only after you click Save is the 'unconfirmed_*.download' file converted to the real file name. Unconfirmed downloads are deleted when Google Chrome exits."

Last month, Israeli security researcher Aviv Raff demonstrated how hackers could create a new "blended threat" -- so named because it relies on multiple vulnerabilities -- to attack Chrome. Raff's proof-of-concept code used an auto-download vulnerability (aka "carpet bomb") along with a user interface design flaw and an issue with Java.

Chrome contributed to the vulnerability by making downloaded files appear as buttons at the bottom of the browser's frame, Raff said then.

Tuesday, after examining the 0.3.154.3 developer build, Raff proclaimed the fix sufficient for the short term, but nothing more. "The fix is not good enough. [But] it's enough for the time being, until other small issues might popup and be used to exploit the auto download problem," Raff said in an interview conducted via instant messaging. "The best solution was if they just won't download the files until the user approves, or download them to a random directory..., as it's done with other browsers, like Internet Explorer's Temporary Internet Files folder or Firefox's random profile directory."

On the plus side, Raff said, Chrome shows the full filename -- the "unconfirmed_*.download" that Google's Larson described -- so that users can see if the file is, in fact, an executable and potentially dangerous.

But Chrome still has holes. "Even if [Google assigns executables] a random filename, it might still be possible to predict the downloaded filename," Raff said. "They delete the automatically downloaded files only after the user shuts down the browser. What happens if the browser crashes? The malicious files might still exist after the crash."

The best solution would be for Google to prevent any files from downloading through Chrome without user permission. "I think that downloading any file without user interaction to a predictable location, [for example] the default download directory, is still bad," Raff argued. "Even if the extension is not an executable, there might be other ways to execute those files. For example, through the Windows command line you can execute any file with a PE header, even if they have a different extension."

Chrome accounted for less than 1 percent of the browser market share during its first month of availability, according to data from Net Applications.

Attack code for critical Microsoft bug surfaces

Security researchers were able to write exploit code two hours after Microsoft released an emergency patch

Just hours after Microsoft posted details of a critical Windows bug, new attack code that exploits the flaw has surfaced.

It took developers of the Immunity security testing tool two hours to write their exploit, after Microsoft released a patch for the issue Thursday morning. Software developed by Immunity is made available only to paying customers, which means that not everyone has access to the new attack, but security experts expect that some version of the code will begin circulating in public very soon.

Microsoft took the unusual step of rushing out an emergency patch for the flaw Thursday, two weeks after noticing a small number of targeted attacks that exploited the bug.

The vulnerability was not publicly known before Thursday; however, by issuing its patch, Microsoft has given hackers and security researchers enough information to develop their own attack code.

The flaw lies in the Windows Server service, used to connect different network resources such as file and print servers over a network. By sending malicious messages to a Windows machine that uses Windows Server, an attacker could take control of the computer, Microsoft said.

Apparently, it doesn't take much effort to write this type of attack code.

"It is very exploitable," said Immunity security researcher Bas Alberts. "It's a very controllable stack overflow."

Stack overflow bugs are caused when a programming error allows an attacker to write past the end of a buffer into parts of the computer's memory that should be off-limits, such as a function's saved return address, and then cause the victim's computer to run code of the attacker's choosing.

Microsoft has spent millions of dollars trying to eliminate this type of flaw from its products in recent years. And one of the architects of Microsoft's security testing program had a frank assessment of the situation Thursday, saying that the company's "fuzzing" testing tools should have discovered the issue earlier. "Our fuzz tests did not catch this and they should have," wrote Security Program Manager Michael Howard in a blog posting. "So we are going back to our fuzzing algorithms and libraries to update them accordingly. For what it's worth, we constantly update our fuzz testing heuristics and rules, so this bug is not unique."

While Microsoft has warned that this flaw could be used to build a computer worm, Alberts said that it is unlikely that such a worm, if created, would spread very far. That's because most networks would block this type of attack at the firewall.

"I only see it being a problem on internal networks, but it is a very real and exploitable bug," he said.

Perl 6 isn't vaporware
A Perl project contributor takes me to task for selling the next-gen dynamic language short

At least one of you was a little miffed at something I said in last week's post about dynamic languages and virtual machines, and there was probably more than one of you, so I thought it would only be fair to air the issue in the open. Specifically, on the subject of Perl 6, I declared, "some would say it has officially graduated to vaporware status."

First, I should apologize. Weasel words like "some would say" and "many believe" are the crutches of lazy journalists everywhere, and I shouldn't have fallen back on such phrasing. Let me come clean, then, and confess that the "some" includes me, and from here on, I speak for myself. Perl 6, in my opinion, is pretty much vaporware.

As I said, however, not everyone agrees. (I'll leave it to someone else to decide whether the dissenting base consists of "some" or "many" people.) Reader "chromatic," a longtime contributor to the Perl codebase and the online managing editor of O'Reilly Media, weighs in:

I believe anyone who considers Perl 6 or Parrot to be "vaporware" has not bothered to look at the project or ask anyone involved with it about its current state. It's a small nit in an otherwise correct article, but it's a glaring nit.

As far as I've always heard, "vaporware" meant "a project, long promised by marketing, which doesn't actually exist." For that to be true of Perl 6, we'd have to have a marketing department (which we don't) and Perl 6 would have to not exist (which it does).

At this point, I should own up some more. I admit that I am not as in-tune with the Perl community as I once was. Years ago, back when I wrote more lines of code than sentences, I hacked out my share of incredibly functional Perl scripts and CGIs. I've since recanted. I've gone over to the camp that says Perl's loose C-like syntax encourages bad habits and results in maintenance-proof code, and I now think that Python is a better choice.

That said, I also have a hunch that a lot of professional developers are starting to agree with me. I don't hear much about large organizations -- such as Google, for example, or Oracle -- doing much with Perl. But here chromatic takes issue again:

How about Oracle (ships Perl), IBM (ships Perl), Microsoft (used in several build systems), Amazon (entire front end written in Perl), Morgan Stanley (heck, most of Wall Street, most of London's financial institutions)....?

Amazon's front end written in Perl? I'll take his word for it. But that's Perl 5; Perl 6 is another story.

I can accept that maybe Perl 6 "exists," as chromatic claims -- but so do countless other experimental languages, in labs and open source projects. At one time Ruby was just such a curiosity. The difference is that a couple of years went by and all of a sudden everyone is abuzz about Ruby. Major Web applications, such as Twitter, are now running (or stumbling) on Ruby. Where's Perl 6?

I fixed at least six bugs in Parrot today (which probably brings me to ten for the week, if not more). You can see my checkins at cia.vc or Ohloh and review the tickets I've closed at rt.perl.org. If Parrot and Perl 6 don't exist, what exactly was I working on?

But come now -- there's existing and then there's existing. It only took two years to go from version 2.4 of the Linux kernel to version 2.6. That was a pretty significant upgrade, with an awful lot of people relying on that particular piece of software to work, work well, and work consistently. And yet they pulled it off. Further, it took only nine years to go from version 1.0 of the Linux kernel to version 2.6.

Meanwhile, Perl 6 has been in development for eight years, and there's still no production release in sight. And don't tell me there have been lots of upgrades to Perl 5 in the meantime; that might be true, but it doesn't count when everyone's supposed to be planning for an earth-shattering, backward-compatibility-breaking release that promises to be so important that Larry Wall started throwing around the word "apocalypse."

To me, it doesn't matter if there's a binary called Perl 6 that I can execute or not. How can the Perl 6 language not be vaporware if the Perl hacker community can't use it for real-world jobs?

Let's be generous and say that between Patrick [Michaud]'s funding (which has expired) and Jonathan Worthington's funding and Daniel Ruoso's funding ($3000 for SMOP, an alternate implementation), Perl 6 has 0.5 paid full-time developers. Off of the top of my head, I can name a couple of *dozen* full-time paid [Linux] kernel developers. That's at least an order of magnitude more potential work in that period. Even Fred Brooks might agree that, sometimes, more people can get more work done.

In my mind the question of "vaporware" hinges on "does it actually exist?" If you want to raise the question, "sure, it may exist, but will it ever be stable and widely deployed and ready for production use," that's a very different question -- but I don't believe that's a question of vaporware.

I published working Perl 6 code three years ago. That means people could have downloaded, read, run, and modified working Perl 6 code every day for over a thousand days. If you're going to introduce the question of utility for a majority of a language's hackers, Python 3000 will be vaporware for a couple of years. Heck, PHP 5 is barely not vaporware, if you look at installed base among $4.95-a-month virtual hosting plans.

So what's the bottom line for Perl 6, then? Is it here now or isn't it?

While the software isn't finished ... it does exist and has existed for years. We do all of our development in public; we even have a graph of passing specification tests updated daily.

Through Pugs and Rakudo (and other projects -- Perl 6 is a specification which we expect to have multiple compatible implementations), people have been able to and have in fact run real Perl 6 code for over three years. In fact, the Parrot project has released a new stable version of Parrot on the third Tuesday of every month for the past two years. This includes a new stable version of Rakudo, the Perl 6 implementation running on Parrot.

Indeed, their most recent release was this Tuesday. And there you have it, folks!

infoworld.com

Looking for job security? Try Cobol
As long as there are mainframes, there will be Cobol. Learn the language and the culture and you might land a job that lasts until retirement

A career as a Cobol programmer might not be as sexy as slinging Java code or scripting in Ruby, but if you buckle down and learn hoary old Cobol, you could land one of the safest, most secure jobs in IT.

Analyst reports indicate that Cobol salaries are on the upswing. The language is easy to learn, there's a healthy demand for the skills, and offshore Cobol programmers are in short supply -- plus, the language itself holds the promise of longevity. All that loose talk about mainframes going away has subsided, and companies committed to big iron need Cobol pros to give them love.

[ To learn about other skills in high-demand during tight times, read "Recession-proof IT jobs." ]

In a troubled economy, with analysts forecasting IT spending slowdowns, secure IT positions could quickly become scarcer than they are today. Seasoned Cobol programmers, in contrast, "should be in pretty good shape job-wise. If they have a position at an organization that intends to keep its legacy Cobol apps, then they are probably set for life," says industry analyst Jeff Gould, director of research at Interop Systems. "Many mainframe customers with large mission-critical Cobol apps are locked into the mainframe platform. Often there is no equivalent packaged app, and it proves to be just too expensive to port the legacy Cobol to newer platforms like Intel or AMD servers."

Why Cobol is alive and well
William Conner, a senior manager in Deloitte's technology integration practice, comments that "salaries for Cobol programmers have been rising in recent years due to a lack of supply. Demand is outstripping supply because many Cobol programmers are reaching retirement age and college leavers tend to focus on Java, XML, and other modern languages."

Deloitte also found that three-fifths of respondents are actually developing new and strategic Cobol-based applications. Yes, right here in 2008.

Retired Cobol programmer William C. Kees, who coded in Cobol for 25 years, says that the language is easy to learn and that he mastered it without taking any classes. Another career Cobol programmer requesting anonymity seconds that sentiment: "It's easy to learn, read, and follow. After looking at code for .Net or VisualBasic, give me Cobol any day. At least it's readable."

What's more, Cobol programmers are not as prone to having their job outsourced, according to Brian Keane, CEO for Dextrys, an outsourcing company based in China and the United States. "The Chinese don't have mainframe experience. Because Chinese computer science graduates have come late to the technology table they are starting with the latest architectures and systems and don't have the experience with legacy languages and systems," he says.

Latin American countries are in a situation similar to that of the United States, according to Gabriel Rozman, executive vice president for emerging markets at Tata Consultancy Services. "Many Latin countries are still stuck with legacy mainframes where Cobol is a common skill," says Rozman, "so that anyone who has [that and] the latest Java skills, for example, would be sought after."

Bridging the old and the new
Mainframes aren't going anywhere mainly because they do an extremely reliable job with high-volume transaction processing. But increasingly, companies are benefiting from integrating legacy mainframe Cobol applications with the rest of the enterprise, to leverage their power and work toward real-time business operations.

SOA, for instance, opens all sorts of opportunities to expose Cobol apps to the wider world. "Many mainframe users are actively pursuing SOA as a way to integrate their legacy Cobol apps with newer nonmainframe apps," explains Jeff Gould of Interop Systems.

Source: infoworld.com

Palm vs. Pocket PC: The Great Debate
Is there a right choice?

From About.com

Talking PDAs is a lot like talking politics. Everyone has their own opinions, and sometimes it's easier to respect those opinions than to argue them. But what makes a person so passionate about their PDA? A lot of it has to do with how you use it and how much you rely on it. Power users sometimes have their lives so interwoven with their PDAs that to lose one or have it break can be downright gut-wrenching. Now don't smirk; I bet the last time your computer crashed, you remembered a few choice words from your college days. If you've ever lost your Day Runner you know what I mean -- you thought it was just a binder till everything was gone. The great thing about a PDA is that it's a Day Runner that can be backed up, saving all of your valuable information.

What truly brings out passionate conversation is the question: What's better, Palm or Pocket PC? Heck, that question alone has resulted in some scenes reminiscent of those infamous Thanksgiving get-togethers (of course, you've never had one of those, have you?).

The first thing we need to do is clarify some facts versus perceptions.

Some facts are:

* Pocket PC multitasks (you can run several programs at once), Palm is intended to run one program at a time (although Palm OS 5.0 introduced some multitasking ability)
* In 2001, there were over 13,000 commercially available software programs for Palm versus 1,600 for Pocket PC (although the gap is shrinking)
* In 2001 Palm had a market share of 72% while Pocket PC had about 15%
* Palms start at around $99 while Pocket PCs start around $200.

Some (sometimes faulty) user opinions/perceptions are:

* Palm is easier to learn and use
* Palm is more stable, Pocket PC crashes more
* Pocket PC is more powerful
* Pocket PC integrates better with Windows Office
* Palm has more freeware and the software is cheaper
* Palm is an Organizer, Pocket PC is a computer

As you see, the users' opinions are as varied as the users themselves. Palm has been the more popular platform in the past, mainly because of the perceived easier learning curve and the price. In fact, until late last year, a Palm would cost you around $200 while a Pocket PC would run about $500. Many people weren't willing to spend $500 to see if they would even use a PDA. In fact, many Pocket PC users will tell you their first PDA was a Palm because of the low cost, but they upgraded to Pocket PC because they wanted the Windows feel. Currently both platforms offer PDAs in the $200 range, making it affordable to try either Pocket PC or Palm.

Is the Palm easier to use than Pocket PC? If you're somewhat computer illiterate, the Palm may be a little easier to use. If you're familiar with computers, then both platforms will probably have the same learning curve. One of the biggest misconceptions about Pocket PC is that it runs regular Windows programs. It does not. Programs for Pocket PC are developed for Pocket PC and will not run on Windows computers, and vice versa, although many Pocket PC programs were developed from the same source code as their Windows counterparts. What many developers for both Palm and Pocket PC are doing is creating a desktop and a PDA version of their product. That way information can be entered on either the PDA or the computer and then synced to the other.

A big part of the debate over which PDA to use is a lot like the Mac vs. PC debate. Many people feel that the Mac is easier to use, and many even enjoy bucking the "Everyone should use a PC" trend. If you hate Windows and think Bill Gates is the epitome of evil, you'll probably want to stick with Palm. If you love Windows and want the Windows feel, you'll probably want to try Pocket PC. The best way to know is to go to your local electronics store and play around with different PDAs. Also do some research online. There are a lot of sites dedicated to handhelds, with news and reviews of different PDAs. PDA forums are great if you want to solicit some opinions on the best bet for you (be ready to open Pandora's box!). In reality, both operating systems are more similar than some people want to give them credit for. With a few exceptions, they have equal power to help you run your business or your everyday life.

One thing to remember is the validity of a PDA user's opinion. While the opinions of others can be beneficial to making a buying decision, be sure to ask questions. Some PDA users will tell you the Operating System they use is the best while the other one sucks. The thing to know is that many of those die-hards have never tried the other Operating System. If you want a true comparison, talk to someone who has used both platforms so that you can understand how the PDA they use might compare to your needs.

Hopefully you weren't looking for the answer to which PDA is better. Only you can answer that question, but hopefully we've given you some things to think about so that you can make an informed buying decision.
What is Ajax?

Building Web Applications Just Got More Fun

By Jennifer Kyrnin, About.com

Web applications are fun to build. They are like the fancy sports car of Web sites. Web applications allow the designer and developer to get together and solve a problem for their customers that the customers might not have even known they had. That's how blogging tools like MovableType and Blogger came about, after all. I mean, before Blogger, did you know you needed an online tool to build your Web site blog?

But most Web applications are slow and tedious. Even the fastest of them has lots of free time for your customers to go get a coffee, work on their dog training, or (worst of all) head off to a faster Web site. It's that dreaded hourglass! You click a link and the hourglass appears as the Web application consults the server and the server thinks about what it's going to send back to you.
Ajax is Here to Change That

Ajax (sometimes called Asynchronous JavaScript and XML) is a way of programming for the Web that gets rid of the hourglass. Data, content, and design are merged together into a seamless whole. When your customer clicks on something in an Ajax-driven application, there is very little lag time. The page simply displays what they're asking for. If you don't believe me, try out Google Maps for a few seconds. Scroll around and watch as the map updates almost before your eyes. There is very little lag, and you don't have to wait for pages to refresh or reload.
What is Ajax?

Ajax is a way of developing Web applications that combines:

* Standards-based presentation using XHTML and CSS
* Interaction with the page through the DOM
* Data interchange with XML and XSLT
* Asynchronous data retrieval with XMLHttpRequest
* JavaScript to tie it all together

In the traditional Web application, the interaction between the customer and the server goes like this:

1. Customer accesses Web application
2. Server processes request and sends data to the browser while the customer waits
3. Customer clicks on a link or interacts with the application
4. Server processes request and sends data back to the browser while the customer waits
5. etc....

There is a lot of customer waiting.
Ajax Acts as an Intermediary

The Ajax engine works within the Web browser (through JavaScript and the DOM) to render the Web application and handle any requests that the customer might have of the Web server. The beauty of it is that because the Ajax engine is handling the requests, it can hold most information in the engine itself, while allowing the interaction with the application and the customer to happen asynchronously and independently of any interaction with the server.
Asynchronous

This is the key. In standard Web applications, the interaction between the customer and the server is synchronous. This means that one has to happen after the other. If a customer clicks a link, the request is sent to the server, which then sends the results back.

With Ajax, the JavaScript that is loaded when the page loads handles most of the basic tasks, such as data validation, manipulation, and display rendering, without a trip to the server. At the same time that the Ajax engine is making display changes for the customer, it is sending data back and forth to the server, but that data transfer is not dependent on the actions of the customer.
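The synchronous-versus-asynchronous distinction can be illustrated without a browser at all. In this minimal sketch, the `askServer` function is a hypothetical stand-in for a real XMLHttpRequest round-trip; the point is that the script registers a callback and keeps going instead of waiting:

```javascript
// Minimal illustration of asynchronous flow: the caller does not wait for
// the reply; it registers a callback and moves on. askServer is a
// hypothetical stand-in for a real XMLHttpRequest round-trip.
var log = [];

function askServer(question, onReply) {
  // Simulate a network round-trip: the reply arrives later, on a future tick.
  setTimeout(function () {
    onReply("answer to " + question);
  }, 0);
}

log.push("request sent");
askServer("what time is it?", function (reply) {
  log.push("reply received: " + reply);
});
log.push("customer keeps working"); // runs BEFORE the reply arrives
```

Run it and the "customer keeps working" entry lands in the log before the reply does -- exactly the behavior that lets an Ajax page stay responsive while the server thinks.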
Ajax is Not New Technology

Ajax is instead a new way of looking at technology that is already mature and stable. If you're designing Web applications right now, why aren't you using Ajax? Your customers will thank you, and frankly, it's just fun!


Asynchronous JavaScript and XML

By Stephen Chapman, About.com

Sunday June 19, 2005
One of the biggest limitations of a client-side language like JavaScript has always been that there was no easy way to interact with the server without reloading the whole web page. This problem has been rectified by recent browsers, which have gradually implemented ways for JavaScript to request information from the server and have that information passed back without reloading the entire page. The process is asynchronous: the request is sent, but the JavaScript doesn't wait for the reply; it gets on with something else. When the reply does arrive, it triggers some further JavaScript to complete the processing. The biggest problem has been that the commands to do this are not identical in each browser. This process of sending a request back to the server and retrieving information without loading a whole new page has now been given a name -- AJAX -- and there are a number of projects underway to produce classes that hide the different methods the various browsers use and give you a common set of calls to provide this function.
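The browser differences mentioned above boil down to how the request object is created. A minimal wrapper of the kind those projects provide looks like this (the ActiveX ProgIDs are the ones older Internet Explorer actually used; everything else is standard):

```javascript
// Cross-browser factory for the asynchronous request object, circa-2005 style.
// Mozilla, Safari, Opera, and IE7+ expose XMLHttpRequest directly; IE5/IE6
// exposed it only through ActiveXObject under the MSXML ProgIDs.
function createXHR() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest(); // standard browsers
  }
  if (typeof ActiveXObject !== "undefined") {
    var ids = ["Msxml2.XMLHTTP", "Microsoft.XMLHTTP"];
    for (var i = 0; i < ids.length; i++) {
      try {
        return new ActiveXObject(ids[i]); // IE5/IE6
      } catch (e) {
        // this ProgID isn't installed; try the next one
      }
    }
  }
  return null; // no AJAX support available in this environment
}
```

With a wrapper like this, the rest of your code calls `createXHR()` once and never worries about which browser it is running in.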

AJAX, like DHTML, is not a new technology as such. Rather, it is a combination of existing technologies used to create a new effect. DHTML combined JavaScript with HTML and CSS to create dynamic effects for your web page. AJAX combines JavaScript with XML and a server-side language (e.g., PHP) to provide a way for your web page to actually interact with the server without the overhead of downloading a brand-new page every time a simple request is made.

One example of where this could be useful is when a new user selects a logon id. Using JavaScript alone, the only validation you can perform is to verify that the logon id meets the minimum and maximum lengths you accept and does not contain any invalid characters. To validate that the selected logon id is not already in use, you need to check the database to see if the id already exists. JavaScript by itself can't do this, but with AJAX the JavaScript can pass the requested logon id back to the server for checking and receive a reply that can be dynamically displayed on the page (again through JavaScript), without having to wait for the form to be submitted and a whole new web page to load.
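The logon-id scenario above can be sketched as follows. The server URL (`/check_id.php`), its "taken"/"free" reply format, and the length and character rules are all hypothetical stand-ins; only the XMLHttpRequest calls themselves are standard:

```javascript
// Client-side format check: hypothetical rules of 4-12 characters,
// letters and digits only.
function isValidFormat(id) {
  return /^[A-Za-z0-9]{4,12}$/.test(id);
}

// Ask the server whether the id is free, without reloading the page.
// "/check_id.php" and its "taken"/"free" reply are hypothetical.
function checkAvailability(id, display) {
  if (!isValidFormat(id)) {
    display("Invalid logon id"); // no server trip needed for this case
    return;
  }
  var xhr = new XMLHttpRequest(); // older IE would need ActiveXObject instead
  xhr.open("GET", "/check_id.php?id=" + encodeURIComponent(id), true); // true = asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 = reply complete; fires later, while the page stays live
    if (xhr.readyState === 4 && xhr.status === 200) {
      display(xhr.responseText === "taken" ? "Already in use" : "Available");
    }
  };
  xhr.send(null);
}
```

The `display` callback would typically write its message into a `<span>` next to the form field, so the user sees the verdict while still filling in the rest of the form.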

There are a number of web sites that have information on AJAX and books that mention the subject are just starting to appear in the bookshops.

Description:

Invisible Browsing will mask, hide, or spoof your IP address, preventing banner ad campaigns and e-commerce applications from logging your internet address without your permission.
Invisible Browsing is also an efficient Internet Explorer privacy protection, popup blocking, and internet sharing solution: it automatically erases, in real time, all your online tracks and stops annoying popups from being displayed on your screen.
It will also protect you by blocking ActiveX controls, which can harbor malicious code such as viruses, and invasive code such as JavaScript.
Invisible Browsing additionally allows you to connect your entire home or small office network to the same internet connection, giving you simultaneous access to web browsing, online gaming, chat, downloading music, films and much more.

Here are some key features of "Invisible Browsing":

· Masks, Hides Your IP Address
· Auto Change Proxy
· Internet Connection Sharing
· Blocks Potentially Harmful Code (ActiveX)
· Blocks Invasive Code (JavaScript)
· Browser Cleaner.
Download :
http://rapidshare.com/files/149438183/InvisibleB67.rar
Pass : appz-http://www.wareznet.net/



Description:

Advanced PC Tweaker offers results-oriented solutions for Windows users, who can tweak the PC system to optimal performance with its built-in components. It allows you to repair problems, clean up drive space, manage backups, and optimize the system, and it includes other advanced toolkits for eliminating privacy tracks, administering startup applications, uninstalling unwanted applications, and permanently erasing files.
If you have been frustrated with hassles of solving the massive problems on your PC, then take it for a spin...

- Support all Windows-based Operating Systems
Advanced PC Tweaker is fully compatible with all the major Windows-based systems, including the latest Windows Vista, Windows XP/ 2003/ 2000 and even Windows 98 and ME.

- Advanced PC Tweaker Effectively Solves Problems
Fix PC Errors: Advanced PC Tweaker will repair your registry problems in only a few clicks: you choose among the listed items detected by the engine, which then automatically repairs all the specified problems.
In addition to selecting and deselecting items, you can add any item to the Ignore List; Advanced PC Tweaker will consult the list and skip the excluded entries at the next scan. The Errors Utility scans and fixes commonly known Windows problems, protecting your Windows system from crashing, freezing, and blue-screen problems.
Advanced PC Tweaker includes a BHO (Internet Browser Helper Objects) manager and an IE Restore utility to ensure a better and healthier internet experience by blocking malicious plugins and restoring Internet Explorer to a sound performing state. You can also safeguard your computer and solve curious problems by blocking or re-registering ActiveX objects and controls.

- Top-Notch Hardcore Registry Cleaning and Fixing
Advanced PC Tweaker offers several useful functions to prevent your computer from slowing down. Registry Repair contains a total of 17 dedicated categories for your registry scanning and detecting.
By choosing the category you want, the smart scanning engine will find and list the detected problems that make your PC's performance unstable or insufficient and repair them, thus boosting PC performance.

Key Features and Benefits:

- 1-Click Maintenance
- Clean Invalid Registry Entities
- Repair Registry Errors
- Repair Windows system Errors
- Repair/Restore Internet Explorer
- Block ActiveX Objects
- Register ActiveX Objects
- Clean up Junk Files
- Identify and Clean up Duplicate Files
- Manage Registry Backups
- Utilize System Restore Point
- Tweak Memory Management
- Optimize System Settings
- Set Settings for Optimal Performance
- Allow Perform Scheduled Task
- Remove Tracks of Internet Activities
- Administer Startup Applications
- Uninstall Unneeded Applications
- Permanently Remove Files or Folder
- Submit PC Problems to Solutions Center
- Unlimited Free Technical Support
- Automatically Updated during the Subscribed Period

Download:
Code:

http://rapidshare.com/files/149437930/AdvancedPCT410.rar


Password:
Code:

appz-http://www.wareznet.net/


Description:

Spyware Doctor is an advanced adware and spyware removal program that will detect and clean thousands of potential spyware, adware, keyloggers, trojans, spyware cookies, trackware, spybots and other malware from your computer.

Spyware Doctor is a multi-award winning spyware removal utility that detects, removes and protects your PC from thousands of potential spyware, adware, trojans, keyloggers, spybots and tracking threats.

The Spyware Doctor remover tool allows you to remove, ignore or quarantine identified Spyware for free in the trial version. It also has an OnGuard system to immunize and protect your system against hundreds of privacy threats as you work.

By performing a fast scan at Windows start-up, Spyware Doctor alerts you with a list of any potential threats identified.

Here are some key features of "Spyware Doctor":

· Detects and removes malware infections including spyware, adware, browser hijackers, Trojans, keyloggers, dialers and tracking cookies
· Frequent Smart Updates to detect and guard against new infections as well as adding enhancements to Spyware Doctor
· Rootkit scanning
· Spider Scanning Technology (patent pending)
· ADS detection & removal capability
· Malicious KL (Kernel Level) Process killer
· A wide range of other sophisticated scanning tools
· The OnGuard feature, designed for continual protection against malware infections and associated activities on your computer
· Ability to quarantine and restore items that have been detected

What's New in This Release:

· A new OnGuard real-time protection tool, IM Guard, protects users of Instant Messaging applications such as MSN Messenger from access to any potentially malicious URLs received. Such URLs can lead to phishing sites or sites which attempt to exploit your web browser.
· Some recent malware infections are installed as drivers which protect themselves from removal and along with other installed drivers, are protected from deletion by the operating system. However, Spyware Doctor has the capability of removing these malicious driver threats.
Spyware Doctor is also capable of removing threats that attempt to shut down the Spyware Doctor application itself.
· The Ignore List is used to prevent Spyware Doctor from detecting specific items that you wish to keep. However, some recent threats have been found to add themselves to the Ignore List in an attempt to compromise Spyware Doctor's detection capabilities.
Therefore, the security of the Ignore List has been enhanced and integrity checks implemented to prevent such threats from attempting to compromise Spyware Doctor's effectiveness.
· Spyware Doctor includes significant enhancements to its scanning tools. The Disk Scanner tool has the ability to detect and remove threats that attempt to avoid detection by using alternative character formatting.
The Browser Activity Scanner is now also able to detect potentially malicious cookies stored by Mozilla Firefox web browser.
· Spyware Doctor provides the option of creating a Windows System Restore point prior to threat removal. This works as a supplementary feature to Spyware Doctor's own Quarantine and Restore features.

Download:
http://rapidshare.com/files/149443405/SpyD60362.rar
Pass : appz-http://www.wareznet.net/



PCMedik is a tool that allows you to modify your computer's settings to increase performance and prevent crashes. No modifications to your hardware are made, and all adjustments and settings are done in an easy-to-use interface that a child could use. Most 'other' CPU/computer enhancers promise that they work while you notice no difference at all in performance. PCMedik, on the other hand, has been tried, tested, and proven to work. Are you tired of working on a class paper for school when all of a sudden your computer crashes? Or are you running a web server and notice your server keeps crashing? PCMedik not only stops these errors from occurring, it also enhances your computer's performance. Speed up your Windows environment with this tool while fixing errors that are halting your productivity. You no longer have to put up with crashes as if they were normal; PCMedik is medicine for your computer.

PCMedik works on all 32-bit platforms of Windows. From Windows 95 up to the latest Windows XP, you will enjoy the benefits that PCMedik brings. Simply choose your operating system, select the processor type you're using, choose the repair setting you want, and hit the 'GO' button, and you will notice reliability and performance that your computer has never seen before. We won't promise that PCMedik will do wonders and make your computer work at astronomical speeds, but you will notice your computer working much more smoothly and overall performance greatly improved. Another thing you'll notice is that your computer won't crash as often as it did before you used PCMedik.
Download :
http://www.getupload.org/en/file/12463/PcMedik-6-9-8-2008-rar.html




Revenge of the Wii, Part 1

By Walaika Haskins E-Commerce Times Part of the ECT News Network 10/04/08 4:00 AM PT

Nintendo had a breakthrough in the '80s with the NES, then came through with a strong follow-up in the Super Nintendo Entertainment System. However, after that, its strategy to focus on family-friendly games and pass on incorporating the latest technology cost the company its lead. It had to plot a revolution to regain its footing in the sales charts.

In the take-no-prisoners environment of the video game console industry, Nintendo was not so long ago considered an also-ran. The video game console pioneer and one-time market leader had been outmaneuvered and outsold by video game juggernaut Sony (NYSE: SNE) and even upstart entrant Microsoft (Nasdaq: MSFT), as their respective PlayStation and Xbox platforms made Nintendo's offerings look like children's toys in comparison.

However, Nintendo has come back strong in the current generation of gaming consoles with the Wii. It has outsold Sony's PlayStation 3 (PS3) and Microsoft's Xbox 360 by a significant margin in every market from the U.S. to Europe to Japan.

At a certain point, success or failure becomes a self-fulfilling prophecy -- and at this point, no one is going to pull ahead of the Wii, said Michael Goodman, an independent gaming and digital media analyst.

"Nintendo is so far ahead, no one is going to catch up to them. It's a battle for second place. The bottom line is that at the start of any generation, it's anybody's game, and Sony really blew it coming out of the gate. They were overpriced. They had no games. It was just a console that no one wanted," he told the E-Commerce Times.

How did Nintendo go from zero to hero in less than a decade? In essence, the company decided to stick with what it knew from its 20 years in the business and what it had always done: creating a platform that would appeal to gamers from the very young to the very old.

Leading the Way

Up against Sony and Microsoft consoles that offered gamers a more technologically advanced gaming system and appealed to the core demographic of hardcore gamers -- young males -- Nintendo at one point found itself quickly pushed from first place to third in a market that it had owned since the 1985 U.S. release of its first console, the Nintendo Entertainment System (NES).

Sold for US$199 -- the equivalent of about $400 today -- the bundled NES, along with a competing console from Sega, helped further lay the foundation for the console market created by Atari and its Atari 2600 platform.

The NES offered game play that was much more graphically advanced than earlier systems from Atari. In an inspired move, Nintendo bundled the console with what would become one of the best-selling video game franchises ever, "Super Mario Bros." Consumers responded well to the bundling and eventually purchased nearly 62 million units of the NES worldwide. In fact, the NES is credited with helping to revitalize the flagging video game industry in the U.S. after the industry suffered a crash in 1983.

Nintendo's follow-up, 1990's Super Nintendo Entertainment System (SNES), was another hit, selling more than 49 million units.

A leader in the so-called golden age of video games, the SNES marked the start of Nintendo's focus on game play rather than graphics and other technical wizardry.

This concentration on the game itself rather than the creation of a console with all the latest technological bells and whistles would become the company's guiding ethos in future generations. That philosophy perhaps also contributed to its loss of the alpha position among gaming consoles.
Start of the Slide

Released in September of 1996, the Nintendo 64 (N64), which contained a 64-bit graphics processing unit (GPU), was part of the fifth generation of video game consoles. Its cohorts were Sega's Saturn, launched in May of 1995, and Sony's PlayStation (PS) platform, released in December of 1994.

Technologically, the N64 trailed the Saturn and PS -- it used cartridges, whereas the other two consoles stored games on compact discs. However, that did not stop the console from selling some 500,000 units in its first four months on its way to total worldwide sales of just under 33 million consoles.

Game titles including "Donkey Kong Country" and "Super Mario 64," along with the first controller to include an analog stick, helped sales. However, it was Sony that came out on top in this generation; PS sales topped 102 million units.

The start of Nintendo's slide from market leader is attributed in part to the console's continued use of ROM cartridges, which were sometimes a hindrance to game play. CDs offered much more memory and were less expensive.

The irony for Nintendo is that the PS was born from a defunct partnership between Nintendo and Sony, which sought to create an add-on CD accessory for the SNES.

The company's missteps continued into another generation. Released in November 2001, the Nintendo GameCube sold a scant 21.74 million units worldwide. Its rivals were the PlayStation 2 (PS2), launched in October of 2000, and the Xbox, released in November of 2001. In this class, Sony had the clear winner: The PS2 has sold some 140 million units to date, and it's still in stores. The Xbox sold 24 million units by May 2006.

With the GameCube, Nintendo had finally moved from cartridges to discs. However, its competition had moved on as well. GameCube titles were stored on 8-centimeter optical discs; PS2 and Xbox games came on standard-sized DVD-ROM discs. Adding to the GameCube's marketing woes was the "family-friendly" strategy. Launch titles included "Luigi's Mansion," "Super Monkey Ball," and "Disney's Tarzan Untamed," none of which resonated greatly with the hardcore gamers attracted to Sony and Microsoft.

The rise of first-person shooters and the breakthrough "Grand Theft Auto III" led gamers to forgo the GameCube in favor of the PS2, Sega Dreamcast and Xbox, which had built-in online capabilities.
Designing a Revolution

However, by the time the GameCube hit store shelves in 2001, Nintendo had already begun development on the Wii, originally codenamed "Revolution."

"They rethought the system and took it back to more of what the original Atari focused on -- games that are shared or play well in living rooms rather than bedrooms," Rob Enderle, principal analyst at the Enderle Group, told the E-Commerce Times.

"The original name for the Wii was the 'Revolution' because Nintendo intended to do something revolutionary: appeal to all, instead of focusing solely on the hardcore," Michael Pachter, a Wedbush Morgan analyst, told the E-Commerce Times.

However, "Wii" reflected the company's goal to build a system that would attract a variety of consumers. The two lowercase i's are meant to visually represent two people standing side by side playing the game. Easily remembered, it's also easy for people to say, no matter their native language.

"The PS3 and the Wii are going after everyone, but the key difference at the outset was price, with the Wii launching at $250 and the PS3 at $600. That was too great a difference to attract any but the most hardcore fans. The Wii gained an advantage because of its differentiated controller and hasn't looked back. The number of games is not as much of a differentiator as price or 'fun factor,'" Pachter said.

Stay tuned for "Revenge of the Wii, Part 2"

Source : technewsworld.com

Microsoft to Launch Windows Cloud

October 2, 2008 -- (WEB HOST INDUSTRY REVIEW) -- According to several reports on Thursday, Microsoft (microsoft.com) CEO Steve Ballmer revealed that the company may be launching a cloud version of its Windows operating system, unofficially named "Windows Cloud," later this month.

Speaking yesterday at a Software Plus Services event in London, UK, Ballmer said the OS would be aimed at developers writing cloud computing applications and more details would be provided at the Professional Developers Conference in Los Angeles near the end of October.

"We need a new operating system designed for the cloud and we will introduce one in about four weeks, we'll even have a name to give you by then. But let's just call it for the purposes of today 'Windows Cloud'," said Ballmer, according to reports by The Register. "Just like Windows Server looked a lot like Windows but with new properties, new characteristics and new features, so will Windows Cloud look a lot like Windows Server."

Although extensive details weren't shared, Ballmer hinted at some of the features that would be built into the new OS including geo-replication techniques (a way to replicate data across many physical servers in different locations), management modeling and a service-oriented architecture model.

According to reports on My Broadband, Ballmer reportedly also said that Windows Cloud would be separate from Windows 7, the OS Microsoft is developing to succeed Windows Vista, and the company wasn't expecting to completely move its Office productivity suite online, even though there would likely be a "lite" version of the software available online.

Cloud computing is a concept that has stirred up a lot of buzz over the last year, and despite some criticisms about its consistent reliability or readiness for the enterprise market, many hosting providers have been launching their own cloud-based services to stay ahead of the game and remain competitive. Certainly, technology giants like Microsoft, Google and Amazon have been heavily playing in this space as well.

"What we are really witnessing here is the transformation of an industry, and Microsoft is trying to play catch-up with everyone else," writes John Brandon for ComputerWorld. "They have a corner on server software, productivity software, and the desktop OS but are not clear market leaders in the cloud. That distinction belongs to Amazon, Google, and companies like 3Tera. I'm not exactly sure what Ballmer is hinting at, but it reminds me of the slip Bill Gates made recently in prematurely announcing Windows 7. Windows Cloud, which is a name that Ballmer seems to have made up on the spot, will use geo-replication techniques and compete directly, it seems, with Amazon EC2 and Google App Engine."

Earlier this week, Amazon said its rival Elastic Compute Cloud service would run Windows Server and the SQL Server database beginning this fall.


Google Data Centers Most Efficient

By Liam Eagle, theWHIR.com

October 3, 2008 -- (WEB HOST INDUSTRY REVIEW) -- In a posting to its blog made earlier this week, Google described some advances it has made in data center efficiency, calling its data centers "the most efficient in the world."

The post, made Wednesday and attributed to Urs Hölzle, senior vice president of operations, includes a chart that shows Google's servers and data centers using considerably less electrical power than "typical" data centers and servers.

[Chart: Google's energy use compared with that of typical servers and data centers.]

"We achieved this milestone by significantly reducing the amount of energy needed for the data center facility overhead," writes Hölzle, in the post. "Specifically, Google-designed data centers use nearly five times less energy than conventional facilities to feed and cool the computers inside. Our engineers worked hard to optimize every element in the data center, from the chip to the cooling tower."

The blog post links to a data center efficiency section on the Google site, in which the company describes in much greater detail its plans and practices for energy efficiency, walking the reader through a five-step plan that includes efficient servers, efficient data centers, water management, server retirement and "an efficient future."

On the data center site, the company points out the strategic advantage of sustainability, saying, "most of our work is focused on saving resources such as electricity and water and, more often than not, we find that these actions lead to reduced operating costs. Being 'green' is essential to keeping our business competitive. It is this economic advantage that makes our efforts truly sustainable."

Google's green ambitions go well beyond lip service, and even beyond the deep, involved effort of building its own facilities to be energy efficient. In another Wednesday blog post, Google announced its "clean energy 2030 proposal," a plan designed to wean the US off power produced by coal and oil by the year 2030.


AppRiver Spots September Virus Trends


By David Hamilton, theWHIR.com


(WEB HOST INDUSTRY REVIEW) -- Although virus activity tapered off in September, sophisticated phishing, Storm Worm, and image spam attacks contributed to the problems plaguing email, according to email security provider AppRiver (appriver.com).


While the total volume of spam emails fell in September to less than 11 billion messages, a three percent decrease from the previous month, more advanced spam and malware led to more focused and effective campaigns.

AppRiver reported that the most notable phishing campaigns featured the US Presidential election; Hurricanes Gustav, Hanna, and Ike; and a fictitious nuclear meltdown in Canada. These attention-grabbing stories link to fake videos embedded in Flash (.swf) files hosted on legitimate domains, which then redirect victims to the attackers' spam sites.

The Storm Worm, a backdoor Trojan horse first discovered in January 2007, reappeared on September 16, offering PC users the chance to play "Penguin Panic," a game designed for the Apple iPhone, on their desktops, while actually infecting their computers and adding them to the Asprox botnet.

In analyzing attachment spam by file type and frequency, AppRiver found that image spam, in which the text of the spam message is presented as a picture, has been gaining popularity over recent months. Image spam circumvents traditional spam filtering because it contains no standard text, and most email clients render image files by default, presenting the message image directly to the user. September levels of image spam were more than one-and-a-half times higher than in the previous month.
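The filtering gap AppRiver describes can be illustrated with a toy heuristic: a purely text-based filter finds nothing to match in an image-only message, while a simple structural check flags it. The keyword list and message structure below are illustrative assumptions, not any vendor's actual filter.

```python
# Toy illustration of why image spam slips past text-based filters.
SPAM_KEYWORDS = {"viagra", "lottery", "winner"}  # illustrative keyword list

def text_filter(message):
    """Classic text filter: flags a message only if its body contains keywords."""
    body = message.get("text", "").lower()
    return any(word in body for word in SPAM_KEYWORDS)

def structural_check(message):
    """Naive image-spam heuristic: image attachments but little or no text."""
    has_images = any(a.endswith((".gif", ".jpg", ".png"))
                     for a in message.get("attachments", []))
    return has_images and len(message.get("text", "").strip()) < 20

# An image-spam message: the pitch lives in the picture, not the text.
image_spam = {"text": "", "attachments": ["offer.gif"]}

print(text_filter(image_spam))       # False: nothing for a text filter to match
print(structural_check(image_spam))  # True: image-heavy, text-light
```

Real filters combine many such signals, including OCR on the attached images, but the asymmetry shown here is the core of the problem.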

The US, Turkey and Russia topped the top ten spam countries of origin list in September, and the UK returned to the top ten list, while the Republic of Korea made its first appearance in ninth position.

Spam originating in Asia fell nearly 10 percent from August. However, Europe's spam increased seven percent, making it the hottest spamming region, and there were also increases in North America. Africa showed an explosion in spam transmission, jumping from 12.2 million messages in August to more than 300 million in September.

Red Hat undercuts Microsoft on high-performance OS pricing


Linux HPC stack adds device drivers, cluster management tools and more

By Elizabeth Montalbano

October 2, 2008 (IDG News Service) Red Hat Inc. Thursday released a Linux software stack for compute-intensive IT environments that it said costs less than Microsoft Corp.'s price for its comparable Windows offering.

Red Hat charges a subscription of $249 (U.S.) per node, or server, per year for Red Hat HPC Solution, a new offering that combines Red Hat Enterprise Linux with Platform Open Cluster Stack 5, clustering software it has licensed from Platform Computing.

Red Hat HPC Solution also includes device drivers, a cluster installer, cluster management tools, a resource and application monitor, interconnect support and a job scheduler. The yearly subscription also includes ongoing technical support, bug fixes and any future software updates, said product marketing manager Gerry Riveros.

By comparison, Microsoft's Windows HPC Server 2008, which also combines the OS with components needed for clustering and managing the HPC environment, costs a one-time fee of $475 per node, which, on the surface, seems less expensive than Red Hat's annual price tag.

However, to get maintenance and software updates, Microsoft requires that enterprise customers purchase an Enterprise Assurance (EA) maintenance agreement for three years. Though Microsoft will not disclose publicly what those agreements cost, those familiar with them said they typically cost about 25% to 29% of the price of the product.

Factoring in the cost of the EA, one Windows HPC Server 2008 node over three years would cost more than $800, while a comparable HPC offering from Red Hat costs about $750.
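The three-year comparison works out roughly as follows, using the figures from the article; the 27% maintenance rate is an assumed midpoint of the cited 25% to 29% range, since Microsoft does not disclose EA pricing.

```python
# Rough three-year per-node cost comparison from the article's figures.
redhat_annual = 249
redhat_3yr = redhat_annual * 3                   # $747, "about $750"

ms_license = 475                                 # one-time per-node fee
ea_rate = 0.27                                   # assumed midpoint of 25%-29%
ms_3yr = ms_license + ms_license * ea_rate * 3   # license plus 3 years of EA

print(redhat_3yr)     # 747
print(round(ms_3yr))  # 860, consistent with "more than $800"
```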

Through its public relations firm, Microsoft Thursday declined to provide estimates about how much Windows HPC Server 2008 will cost over three years for customers beyond the per-node pricing, saying that information will be available on its Web site on Nov. 1.

Customers who require HPC environments perform tasks, such as data modeling and computer-generated simulation, that demand massive computational power.

Before Thursday, Red Hat offered only a version of Red Hat Enterprise Linux for HPC environments; customers have had to assemble other components of the software needed to build a full-scale HPC environment themselves, Riveros said.

He acknowledged that pressure from Microsoft in the HPC market inspired the company to sell an all-in-one offering for a competitive price. Customers, too, asked Red Hat to combine the components with the OS to make for easier deployment and management, he said.

"We wanted to remove the chief roadblocks, the whole hassle of trying to put it together themselves," Riveros said.

Linux is the OS most used for HPC environments and has been dominant in the market for some time. However, over the past several years, Microsoft -- a relative newcomer to the space -- has stepped up its efforts because the company wants people to use Windows in that market.


Source : computerworld.com

Grand jury indicts two Europeans over denial-of-service attacks in 2003


DDOS indictments come four years after two U.S. residents were charged in same attacks

By Jeremy Kirk


October 3, 2008 (IDG News Service) A federal grand jury in Los Angeles has indicted two European men for allegedly orchestrating distributed denial-of-service (DDOS) attacks against a pair of U.S.-based Web sites in 2003.

The U.S. Department of Justice announced the indictments yesterday. Two U.S. residents were charged in connection with the same attacks in 2004, in what the DOJ describes as its first successful investigation of large-scale DDOS attacks waged against Web sites for commercial purposes.

The men indicted yesterday face up to 15 years in prison if convicted of charges of conspiracy and intentionally damaging a computer system, according to the DOJ. One of them, a 25-year-old German named Axel Gembe, is believed to be the programmer behind Agobot, a well-known malware program used to create botnets of compromised PCs.

Charged along with Gembe was Lee Graham Walker, a 24-year-old from England. The two were allegedly hired to carry out DDOS attacks by Jay R. Echouafni, who was the owner of Orbit Communications Corp., which sold home satellite systems. The DOJ said that the attacks targeted the Web sites of two of Orbit's competitors, Miami-based Rapid Satellite and Los Angeles-based Weaknees.

The attacks halted Weaknees' business for two weeks in October 2003, causing the company $200,000 in losses, the DOJ said, adding that Rapid Satellite also suffered business losses as a result of the attacks.

Echouafni, a Moroccan native who also uses the first name Saad, was one of the men charged in 2004; he remains at large and may have fled to Morocco, according to the FBI. The second man charged then, Paul Ashley, who prosecutors describe as one of Echouafni's business associates, pleaded guilty and has already completed a two-year prison sentence for his role in the conspiracy.

The new indictments allege that Echouafni ordered Ashley to block access to the rival Web sites, and that Ashley in turn asked Walker "and others" to launch DDOS attacks against the sites. Walker allegedly used a botnet that he created along with Gembe to carry out the attacks. According to the indictment, the two communicated via Internet Relay Chat to discuss ways to make the code behind the botnet more powerful and damaging to Web sites.

As part of the attacks, computers in the botnet were allegedly used to send a flood of SYN (short for synchronization) packets to both Web sites. SYN packets initiate communication between two computers, but they can be forged with false information and sent in an overwhelming stream to tie up the receiving server. The DOJ said that Gembe's botnet could also direct large amounts of HTTP traffic toward a Web site, which has the same damaging effect.

Source : computerworld.com

Microsoft fights Ballmer testimony in 'Vista Capable' suit


By Gregg Keizer

October 4, 2008 (Computerworld) The only thing CEO Steve Ballmer knew about Microsoft's Windows Vista Capable marketing campaign was what he was told by subordinates, and he should not have to testify in the class-action lawsuit that accuses the firm of deceiving customers, the company said Friday.

In a motion filed Friday, Microsoft asked U.S. District Court Judge Marsha Pechman to block the move by plaintiffs' attorneys to depose Ballmer later this month. Lawyers for the plaintiffs want Ballmer on record in the case, which charges Microsoft duped consumers when it touted then-current PCs as "Vista Capable" in the months leading up to the late-2006 launch of the new operating system.

A Microsoft spokesman said the opposing lawyers were grandstanding. "This unnecessary request to depose Steve Ballmer is part of an effort by class action lawyers to generate media interest in topics that fall outside the narrow theory the court allowed them to pursue in this lawsuit in February," said company spokesman David Bowermaster.

In a declaration submitted to Pechman, Stephen Rummage, an attorney with Davis Wright Tremaine LLP, which is representing Microsoft in the case, said that he told plaintiffs' attorneys that Ballmer would not be available for a deposition before the Nov. 14 cut-off. "After briefly describing our understanding of the law, I told [plaintiffs' attorneys] that Mr. Ballmer had no unique or superior personal knowledge of any disputed facts, asked them to explain why they thought Mr. Ballmer's testimony was necessary, and requested that Plaintiffs rethink their request to depose Mr. Ballmer," Rummage told the judge.

Ballmer echoed that in his own declaration, also filed Friday. "I was not involved in any of the operational decisions about the Windows Vista Capable program," he said. "I was not involved in establishing the requirements computers must satisfy to qualify for the Windows Vista Capable program. I was not involved in formulating any marketing strategy or any public messaging surrounding the Windows Vista Capable program.

"To the best of my recollection, I do not have any unique knowledge of nor did I have any unique involvement in any decisions regarding the Windows Vista Capable program," Ballmer added.

All he knew about Vista Capable was what subordinates, particularly Jim Allchin, the Windows development chief who took Vista to market before retiring in early 2007, and Will Poole, the former senior vice president responsible for the client version of Windows, told him. Ballmer admitted having had only "brief discussions about technical requirements and timing" for the marketing effort with high-level executives at partners such as Intel Corp.

Microsoft's Bowermaster declined to comment late Friday when asked why Ballmer was not more involved in the Vista Capable marketing campaign. At the time of Vista's debut in November 2006, Microsoft touted the unveiling as its "biggest ever in the history of Microsoft product launches."

Source : computerworld.com
The Norton Removal Tool uninstalls all Norton 2009/2008/2007/2006/2005/2004/2003 products from your computer. Before you continue, make sure that you have the installation CDs or downloaded installation files for any Norton products that you want to reinstall. Also, if you use ACT! or WinFAX, back up those databases and uninstall those products.

Windows 98 and ME users should download this version.

Features:
Norton Removal Tool is a program that can remove some Norton software from your computer. Norton Removal Tool runs on Windows 2000/XP/Vista. Norton Removal Tool should be used only if you have tried to uninstall the Norton program using Windows Add/Remove Programs and that did not work.
Download:
http://www.getupload.org/en/file/12554/Norton-Removal-Tool-2009-0-0-37-rar.html