Sunday 15 September 2013

Looking To Twitter To Reignite Tech I.P.O.’s


Beginning in 2011, the market for technology initial public offerings exploded with a number of blockbuster offerings — but went quiet soon after Facebook’s $16 billion stock sale last year.

Now investors and deal makers are hoping that Twitter’s coming stock sale will help the once-soaring technology sector take flight again.

Companies with less than $1 billion in revenue, like Twitter, may start the I.P.O. process in secret. (The New York Times)

Analysts have estimated that Twitter, the social network, could be valued at more than $10 billion and raise hundreds of millions of dollars, making it the biggest technology I.P.O. since Facebook. That would be manna to a landscape that has been a bit barren lately, although some changes in the technology sector are likely to temper any broad expansion of new stock sales.

About 22 technology deals have priced in 2013, about 17 percent of all I.P.O.’s this year, according to data from Renaissance Capital. That is the lowest percentage of total initial stock sales since 2008, when the industry represented just 10 percent of all deals. By contrast, technology offerings made up 35 percent of new stock sales in 2011 and 30 percent last year.

The trend runs counter to an overall increase in the number of offerings: 132 I.P.O.’s priced this year, up 45 percent from a year ago. Renaissance Capital predicts at least 200 companies will go public by Dec. 31, making 2013 the busiest year for new stock sales since the financial crisis.

Some of that decline for technology deals occurred in the wake of Facebook’s botched offering, which was marred by technical errors that dented the overall market for I.P.O.’s for weeks. And turmoil in the markets this summer, including a drop of nearly 3 percent in the Standard & Poor’s 500-stock index in June, was blamed for a few stock sales that fell short of expectations.

Yet businesses can also afford to be more patient, biding their time before becoming public companies.
“I wouldn’t characterize it as companies not needing to go public, but they don’t feel a rush to go public,” said Cully Davis, the head of technology initial public offerings at Credit Suisse.

The quieting of the technology space is tied in part to changes in the business landscape. One is the passage last year of the JOBS Act, which allows companies with less than $1 billion in revenue to begin the I.P.O. process in secret. That has helped mask the number of would-be debutantes exploring stock offerings.
Nearly every company that qualifies for the JOBS Act has taken advantage of it, according to PricewaterhouseCoopers. Though it announced its offering plans in a 135-character posting last Thursday, Twitter has availed itself of the law to avoid actually disclosing any specific financial information for now.

Another factor is that companies have been weighing their options more cautiously. Would-be I.P.O. candidates have other ways to raise money these days, including private stock sales that let shareholders divest themselves of their holdings.

“Small companies, particularly in the technology area, are deciding that being an independent public company is not the profit-maximizing strategy,” said Jay R. Ritter, a professor of finance at the University of Florida in Gainesville who studies initial public offerings.

For instance, SurveyMonkey raised $444 million late last year from an array of investment firms, in a move that let employees and early investors cash out. It also collected $350 million in new debt financing lined up by JPMorgan Chase.

“This transaction affords us all of the capital benefits of a public offering without the costs and distractions of an I.P.O. and the demands of operating as a public company,” Dave Goldberg, chief executive of SurveyMonkey, the Internet survey company, said in announcing the capital-raising.

Some companies that may go public are also exploring selling themselves, particularly if technology giants like Cisco and Google are willing to pay top dollar. Depending on the bent of existing backers, the promises of a quick payday might trump the possibility of a bigger valuation later.

This year alone, start-ups like Waze, the maker of a popular map application, and ExactTarget, a marketing software company, sold themselves to Google and Salesforce.com instead of pursuing an I.P.O.
Still, one banker said that a majority of companies that weigh a stock offering or a sale ultimately elect to go public.

For all the attractions of staying private, many technology companies still view an initial public offering as a milestone, serving as a branding event and an avenue for further growth.

Wall Street expects offerings from more companies across all areas of the industry. So far, many have hailed from a few sectors, including enterprise software, advertising technology and communications.
Consumer-focused businesses, which dominated headlines in 2011 and 2012, have made up a small but growing percentage.

Besides Twitter, companies that are on deck for a public offering in the next few months include Chegg, a start-up that rents textbooks to students; FireEye, a cybersecurity company; and Covisint, a cloud computing services provider.

“The pipeline spans across all areas of tech,” said Chet Bozdog, a co-head of technology, media and telecommunications investment banking at Bank of America Merrill Lynch. “There’s a diverse pool of I.P.O.’s coming.”

Analysts and deal makers say investors remain hungry for new offerings despite the occasional big misstep, like those of Groupon and Zynga, both still well below their debut prices.

The most successful companies, these people say, combine high growth rates with a clear map to profitability — an actual profit is optional, at least at first — and a defensible business plan.

Tableau Software, a maker of business intelligence software, has more than doubled its stock price since going public in late May, closing on Friday at $72.70. And stock of Marketo, a cloud-based marketing company, has surged from an I.P.O. price of $13 a share to $34.42 as of Friday.

The model for successful offerings is LinkedIn, the social network whose 2011 market debut enjoyed a pop that others have long envied. LinkedIn excited investors with promises of strong profit and revenue growth and teased them by selling just 8 percent of its total shares, increasing demand.


Since then, it has reported steadily climbing revenue, and has been rewarded with a valuation that has climbed to more than $32 billion from $4.3 billion.

Tuesday 3 September 2013

A Samsung smart watch and smartphones with bigger screens expected in Berlin


A smart watch from Samsung Electronics, an Acer smartphone that can shoot 4K video and a Sony one with a 20-megapixel sensor, plus a plethora of tablets and TVs are all expected at this year's IFA consumer electronics show.

Consumer electronics manufacturers from all over the world are once again heading to Berlin for the show, which doesn't open its doors to the public until Friday. By then, though, most of the new products will have already been announced at news conferences on Wednesday and Thursday.

Samsung is expected to launch a smart watch, the voice-controlled Galaxy Gear, rumored to allow users to keep track of calls, messages and social networks. The device will also have calorie and pulse monitors, and apps that take advantage of those features, according to media reports. The smart watch segment is being increasingly hyped, but expectations should in this case be tempered by the fact that the Galaxy Gear is a first-generation device, and few vendors get everything right the first time.

The Korean company is expected to launch a new Galaxy Note too. When it launched the first phone-and-stylus combination back at IFA in 2011, the form factor had its detractors. But Samsung has managed to create a new product category, in which it now faces stiff competition from the likes of Sony and LG Electronics.

The Note's screen size has increased from 5.3 inches to 5.5 inches and is expected to be 5.7 inches on the Galaxy Note 3, with a 1,920 x 1,080 pixel resolution. The Note 3 will have a 13-megapixel camera and Samsung is also expected to stick with a MicroSD card slot, in addition to the 32GB or 64GB of integrated storage.

The device will be powered by either a Qualcomm Snapdragon 800 or Samsung's own Exynos 5 Octa processor. The company demonstrated a new model of the latter in July. The Exynos 5420 has four Cortex-A15 processors running at 1.8GHz and four additional Cortex-A7 cores at 1.3 GHz. It also has a six-core Mali-T628 GPU for improved graphics performance. Compared to its predecessor, the Exynos 5420 will also be more power efficient, according to Samsung. At the time, Samsung said the processor was scheduled for mass production in August.

Any large-screened new Samsung smartphone will probably have to duke it out with the Xperia Z1 that Sony is expected to launch at IFA. In an effort to differentiate its new flagship from the competition, Sony looks to be taking a page from Nokia's playbook by focusing on the camera, which will have a 20-megapixel sensor. The smartphone's specification is also rumored to include a 5-inch full HD screen and a quad-core Snapdragon 800 processor.

Some smartphone and tablet vendors didn't want to get drowned out at IFA, so they have already announced products ahead of the show.

From LG Electronics comes the G Pad 8.3 tablet, which has an 8.3-inch screen with a 1,920 x 1,200 pixel resolution. It is powered by a quad-core Snapdragon 600 processor running at 1.7GHz and has inherited some features from LG's recently announced G2 smartphone, including the ability for users to knock on the screen to turn on the device. With a feature called QSlide, users will be able to control "up to three different apps in one window with no interruption."

The tablet will be rolled out in North America, Europe and Asia as well as other regions starting next quarter. Prices will be announced at launch time, according to LG.

Archos also wants a bigger piece of the tablet market and will show a number of new products in Berlin. Android-based tablets in its Platinum range will be made out of aluminium and have quad-core processors and screens with resolutions of up to 2,048 x 1,536 pixels. The tablets in the new ChildPad range feature a user interface designed for children, parental controls as well as a filtered version of the Google Play app store.

The rapid growth of the tablet market has left the PC sector struggling. Vendors are looking for new ways to lure consumers into buying a PC as well as a tablet.

Acer has announced the DA241HL, an Android-based all-in-one PC that has a 24-inch Full HD touchscreen and is powered by a Nvidia Tegra 3 quad-core processor. Via an HDMI connector it can also double as a display for a Windows 8-based laptop or desktop computer. The DA241HL will be available mid-October and cost from €429 (US$570).

LG, on the other hand, is hoping a screen with a 21:9 aspect ratio will help. Consumer interest in the format has increased since LG launched its first monitors last year, it said. Last week the company unveiled the V960 all-in-one PC, which has a 29-inch, 21:9 screen with picture-in-picture functionality, allowing users to browse the Internet while watching TV, the company said. LG didn't announce any details on when it will ship or what it will cost.

The TV sector is another part of the consumer electronics industry that has been struggling in the last couple of years. The addition of 3D has largely been a failure, so TV manufacturers have instead set their sights on 4K resolution sets, which have a 3,840 x 2,160 pixel resolution.
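
For scale, here is the pixel arithmetic behind those numbers as a quick illustrative check: 3,840 x 2,160 works out to exactly four times the pixels of a 1,920 x 1,080 full HD set.

```python
# Quick check of the "4K" pixel count against full HD.
uhd_pixels = 3840 * 2160      # 8,294,400 pixels
full_hd_pixels = 1920 * 1080  # 2,073,600 pixels
print(uhd_pixels / full_hd_pixels)  # 4.0 -- four times full HD
```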

They face two main challenges -- lack of content and getting the price down to something a majority of consumers can afford. Recently, Samsung and Sony both dropped the cost of their 4K products, in Sony's case to around $4,000 for a 55-inch model.

Just like last year, all the major vendors are expected to show new 4K TVs, although it remains to be seen whether they have cheaper models in store. Rumors ahead of the show are mostly about 4K products that are out of reach for most consumers, including a 98-inch TV from Samsung.

One way to get around the shortage of 4K video content is to allow users to create their own, which is what Acer's Liquid S2 does. The device will be unveiled at IFA, and is the first smartphone capable of recording 4K video. The device has a 6-inch full HD screen and is also powered by a 2.2GHz quad-core Snapdragon 800 processor. The Liquid S2 will be available at the end of October. Pricing was not announced.

Sony's Xperia Z1 is also rumored to be capable of 4K video recording.

IFA opens in Berlin on Friday and will continue through Sept. 11.

Acer experiments with large-screen Android PC to beat sales drop


With PC sales dropping alarmingly, what will come after the Windows platform? With its new DA241HL model, Acer appears to think that a small part of the answer could be, of all things, Google’s Android.

Volume vendors have been experimenting with Android for some time around the fringes, but the appearance of Acer’s second 24-inch all-in-one (AIO) Android 4.2 touch PC at this week’s IFA Show in Berlin is still a striking development.

Based around Nvidia’s quad-core Tegra 3 processor, the unit lacks a keyboard (although one can be added, which Acer might or might not make available as an accessory). Lacking that, it is basically a giant tablet, as its Tegra architecture might suggest.

This does raise the question of who and what it is for. One answer is that it is a display unit for smaller Android tablets and smartphones, which can be connected to it using MHL (Mobile High-Definition Link) technology. Given that the price being quoted for the AIO in advance of its official release is around 429 euros (approximately £360), that looks like a possibility.

The second possibility is that it is simply Android 4.2 on a larger screen, running the same Play store apps as any tablet. However, how many of those apps – designed to fit on smaller screens – will look good on a 24-inch screen (with 20 to 75 degrees tilt) is not clear.

Alternatively, MHL allows the DA241HL to connect to a conventional Windows 8 PC via the HDMI or USB ports and be used as a two-point touch display.

In echoes of Google’s other PC operating system, the Chrome OS, the system also supports up to five separate users from the same machine, allowing each to access their own desktop and apps.

Last week IDC predicted that PC sales would drop by 9.7 percent globally in 2013, and one reason cited is that consumers have grown tired of the one-size-fits-all model espoused by Windows 8; on that view, Windows sales will not revive until prices drop to Android-like levels. PC makers appear to agree; experimentation with new designs could be a feature of the IFA Show for some time to come.

Recently Acer's been feeling some of this pain, reporting year-on-year PC sales that were a stinging 32 percent lower in the second quarter of 2013.

Verizon to buy out Vodafone's stake in mobile unit for $130B


Verizon Communications has reached an agreement to buy Vodafone Group's 45 percent stake in its Verizon Wireless subsidiary for US$130 billion.

Under the deal, Verizon will take 100 percent ownership of the wireless unit, the largest mobile operator in the U.S. This will enhance its ability to offer customers "seamless and integrated services," the carrier said in a press release.

The transaction has been unanimously approved by the boards of both companies and is expected to close in the first quarter of 2014, subject to customary regulatory approvals. Verizon will pay a combination of cash and stock for Vodafone's stake.

"As a wholly owned entity, Verizon Wireless will be better equipped to take advantage of the changing competitive dynamics in the market and capitalize on the continuing evolution of consumer demand for wireless, video and broadband services," Verizon Chairman and CEO Lowell McAdam said in the press release.

"This transaction allows both Vodafone and Verizon to execute on their long-term strategic objectives," Vodafone Group CEO Vittorio Colao said in the release. "Our two companies have had a long and successful partnership and have grown Verizon Wireless into a market leader with great momentum. We wish Lowell and the Verizon team continuing success over the years ahead."

Verizon has sought to buy out its wireless business, originally formed as a joint venture with Vodafone, for several years. The transaction is unlikely to have a significant impact on U.S. mobile consumers, industry analysts said last week. Vodafone may use the huge windfall to buy smaller carriers and further its pursuit of wireline operations, analysts said.

Yahoo's China page closes following gradual phase out


After seeing its popularity decline, Yahoo's Internet portal in China has formally closed down, in a sign that e-commerce giant Alibaba Group is transitioning away from the brand.

The portal went offline on Sunday. Its closure is rooted in an agreement Yahoo made last year with Alibaba Group, which has control over the Yahoo brand in the country.

For years now, Alibaba has operated Yahoo's China business as part of a $1 billion investment the U.S. company made back in 2005. In exchange, Yahoo acquired a 40 percent stake in Alibaba.

But last year, Yahoo agreed to sell part of that stake back to the Chinese e-commerce company, following ongoing disagreements between the two Internet giants. The share buy-back resulted in Yahoo granting Alibaba "a transitional license" to continue operating its brand for up to four years.

Since then, Alibaba has been phasing out Yahoo products. In December, Yahoo's music service in China went down. Then earlier this year, Yahoo's China site announced the closure of its email service, which formally went offline last month.

Sunday's shutdown of Yahoo's Chinese portal is the result of a strategy adjustment, the site's team said in an Internet posting. The portal, at cn.yahoo.com, now reroutes to an Alibaba site promoting public welfare projects.

Alibaba declined to elaborate on the site's closure. Yahoo had no immediate comment.

The popularity of the Yahoo portal site has gradually waned over the years, as the influence of Chinese Internet companies has only grown. In May, the site ranked as the tenth most-visited Internet portal in the country, according to CR-Nielsen, an Internet research company.

Alibaba likely has no more use for the Yahoo brand, considering that the company is focused on e-commerce, and not media portal sites, said Li Zhi, an analyst with Beijing-based research firm Analysys International.

"China Yahoo has been under Alibaba for many years. Its most valuable properties have been dismembered and used," she said, pointing to how Alibaba had originally wanted access to Yahoo's search technologies. In 2009, however, Yahoo decided to use Microsoft Bing to power its searches.

"Alibaba already has no need for a China Yahoo that's been squeezed dry," Li added.

Artist develops gesture-based payment technology


Artist Heidi Hinder has developed wearable technology that allows users to exchange money through physical gestures including handshakes, hugs and tap dances.

The craft-based project, dubbed ‘Money No Object’, relies on RFID chips worn by the buyer and the seller in rings or gloves to complete monetary transactions.
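
Conceptually, a gesture-triggered payment is a reader seeing the buyer's and seller's tags together and moving value between the accounts linked to them. The sketch below is a purely illustrative toy model of that flow, not Hinder's implementation; the tag IDs and amounts are made up.

```python
# Toy model: two RFID tags read in one gesture trigger a fixed transfer.
balances = {"buyer-tag-01": 20.00, "seller-tag-07": 0.00}  # hypothetical tag IDs

def handshake(buyer_tag: str, seller_tag: str, amount: float) -> None:
    """Move `amount` from buyer to seller when both tags are read together."""
    if balances.get(buyer_tag, 0.0) < amount:
        raise ValueError("insufficient balance")
    balances[buyer_tag] -= amount
    balances[seller_tag] = balances.get(seller_tag, 0.0) + amount

handshake("buyer-tag-01", "seller-tag-07", 2.50)
print(balances)  # {'buyer-tag-01': 17.5, 'seller-tag-07': 2.5}
```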

The artist developed the idea in Bristol through the Watershed Craft and Residencies Programme and worked with Pervasive Media Studio’s technology team in order to turn basic human interactions into monetary payments.

“My main aim was that any technology I incorporated should unite people, bringing them closer together by triggering some form of physical or emotional exchange between users,” writes Hinder in her research report on the Watershed website.


“I hoped that the crafted objects would not only raise questions conceptually about money and value, but also facilitate meaningful or thought-provoking human-to-human interactions, or sensory experiences, mediated by an appropriate form of digital technology, and embedded within a tactile, appealing and intriguing object, or series of objects,” she continues.

Hinder believes that her project could be used as a reinvented replacement for the clear plastic donation box.

She suggests on her website that visitors could buy a piece of wearable technology from a gift shop and load it with credit, before using it to make purchases “gaining some alternative emotional value to their payment transaction”.

Hinder is currently looking for investment in the idea to help her continue with the research and trial it in a museum or gallery.

Wearable technology is gaining an increasing amount of interest as a range of technology manufacturers aim to bring new products to market, including Google Glass and the Samsung Galaxy Gear smartwatch.



News junkie's open-source project links Bitcoin with publishers


Ankur Nandwani is a news junkie who keeps hitting pay walls. He would pay for content, but not for a subscription.

Nandwani, 27, merged his interest in news with Bitcoin, a virtual currency that many people think will change the future of payments. With co-founders Bo Li and Valerie Chao, he developed Bitmonet, an open-source tool that lets publishers accept micropayments in Bitcoin for news stories.

Bitmonet is just a side project for Nandwani, who has a day job as a senior software engineer in San Francisco. He started analyzing Bitcoin about six months ago and wanted to grow interest in the virtual currency.

"It's all about encouraging bitcoin adoption," Nandwani said. "I think in the early stages of the bitcoin ecosystem.....it's better to increase bitcoin adoption. We can think of making money later."

News publishers have struggled to strike the right balance between generating online revenue and not alienating readers -- already bouncing from one free online news outlet to another -- with pay walls. Many tease users with free stories and gently nudge them to paid subscriptions when they hit a limit.

But charging one-off fees for news stories is a hassle: users don't want to create an account and enter their credit card details for a single news story. It's easier just to move on.

Bitmonet leverages Bitcoin's strength as a digital substitute for cash. In a demonstration on Bitmonet's website, clicking on a story brings up a pop-up window offering a news story for US$0.10, a one-hour pass for $0.15 or a day-long pass for $0.20.

The one-hour pass costs 0.0012 of a Bitcoin. Clicking the "Pay with Bitcoin" button launches Bitcoin wallet software on a person's computer. Web-based wallet software can be used by copying the payment address, Nandwani said.
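
As a rough sketch of what the "Pay with Bitcoin" step can involve, the snippet below builds a BIP 21-style bitcoin: URI for the one-hour pass; wallet software registered for that URI scheme can open it directly. The address and label are hypothetical, and Bitmonet's own implementation may differ.

```python
from urllib.parse import urlencode, quote

def payment_uri(address: str, amount_btc: float, label: str) -> str:
    """Build a BIP 21 bitcoin: URI that desktop wallet software can open."""
    params = urlencode({"amount": f"{amount_btc:.8f}", "label": label}, quote_via=quote)
    return f"bitcoin:{address}?{params}"

# Hypothetical publisher address; 0.0012 BTC matches the one-hour pass above.
print(payment_uri("1ExamplePublisherAddr", 0.0012, "One-hour pass"))
```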

The transaction is painless: users don't have to enter their financial details or create an account with the publisher.

Bitcoin's peer-to-peer network uses a system of computers called miners to cryptographically verify that a transaction is legitimate. Usually, a transaction needs to garner six "confirmations" before it is considered complete, which at times can take up to three hours.

But Nandwani said merchants can accept as little as one confirmation to let people read the story as soon as possible.
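
To illustrate what accepting a single confirmation could look like, the sketch below polls a Bitcoin Core node over JSON-RPC and unlocks the story once the transaction reaches a chosen threshold. The RPC URL, credentials and transaction ID are placeholders; this is not Bitmonet's own code.

```python
import time
import requests  # third-party HTTP client: pip install requests

RPC_URL = "http://127.0.0.1:8332"      # local Bitcoin Core node (placeholder)
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

def confirmations(txid: str) -> int:
    """Return how many confirmations the node reports for a wallet transaction."""
    payload = {"jsonrpc": "1.0", "id": "bitmonet-demo",
               "method": "gettransaction", "params": [txid]}
    reply = requests.post(RPC_URL, json=payload, auth=RPC_AUTH).json()
    return reply["result"]["confirmations"]

def wait_for_payment(txid: str, threshold: int = 1, poll_seconds: int = 10) -> None:
    """Unlock the content as soon as the payment reaches the chosen threshold."""
    while confirmations(txid) < threshold:
        time.sleep(poll_seconds)
    print("Payment confirmed -- unlock the article.")
```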

If users choose to buy time-based access to a site, they will have to remember to keep their cookies, which are information files retained by a web browser that are used by websites to remember certain user information.

It's a small sacrifice, but one to keep in mind since many people configure their browsers to delete their cookies for privacy reasons. Nandwani said "we are trying to maintain a balance between creating an account and keeping it frictionless."
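
A time-limited pass of this kind is usually just a cookie carrying an expiry time. The Flask sketch below is illustrative rather than Bitmonet's actual code: it grants an hour of access, matching the $0.15 tier above, once some payment handler has confirmed the purchase.

```python
import time
from flask import Flask, request, make_response

app = Flask(__name__)
PASS_SECONDS = 3600  # one-hour pass

@app.route("/grant-pass")
def grant_pass():
    # In a real deployment this would run only after the payment is confirmed,
    # and the value would be signed to prevent tampering.
    expires_at = int(time.time()) + PASS_SECONDS
    response = make_response("Pass granted")
    response.set_cookie("pass_expires", str(expires_at), max_age=PASS_SECONDS)
    return response

@app.route("/article")
def article():
    expires = request.cookies.get("pass_expires")
    if expires and int(expires) > time.time():
        return "Full article text"
    return "Paywall: please buy a pass", 402
```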

Bitmonet is configured now to use BitPay as a payment processor, but it can use different ones. BitPay, based in Atlanta, specializes in processing transactions for merchants. BitPay converts Bitcoin revenue to cash and wires it daily to a merchant's bank account.

Nandwani said Bitmonet plans to add a WordPress plugin in the coming weeks for micropayments on that publishing platform. Other development plans include creating SDKs (software development kits) for Android and iOS that would allow Bitmonet to be used for other things, such as virtual goods, he said.

Microsoft: Talks with US gov't on surveillance transparency break down


Negotiations have broken down between two Internet giants and U.S. government representatives over the companies' requests to publish information on the surveillance requests they receive, a Microsoft executive said Friday.

Microsoft and Google both filed lawsuits in June asking that the companies be allowed to disclose more information about U.S. government surveillance requests they receive. The two companies agreed to extend the government's deadline to respond to the lawsuits during negotiations over recent weeks, but those negotiations have failed, Microsoft General Counsel Brad Smith wrote in a blog post.

"We hoped that these discussions would lead to an agreement acceptable to all," Smith wrote. "While we appreciate the good faith and earnest efforts by the capable Government lawyers with whom we negotiated, we are disappointed that these negotiations ended in failure."

The two companies requested that they be allowed to publish data about the number of surveillance requests they receive after former U.S. National Security Agency contractor Edward Snowden leaked information about the agency's widespread surveillance activities.

"We both remain concerned with the Government's continued unwillingness to permit us to publish sufficient data relating to Foreign Intelligence Surveillance Act (FISA) orders," Smith wrote. "We believe we have a clear right under the U.S. Constitution to share more information with the public."

U.S. Director of National Intelligence James Clapper's announcement Thursday that his office would begin to publish the total number of national security requests each year was a "good start," Smith wrote. "But the public deserves and the Constitution guarantees more than this first step."

Microsoft wants to publish information showing the number of national security demands for user content, such as the text of an email, he said.

With the negotiations having broken down, Microsoft and Google will move forward with their lawsuits, Smith said. The U.S. Department of Justice has a late Friday deadline to respond to both Google's and Microsoft's lawsuits in the U.S. Foreign Intelligence Surveillance Court.



Leaked US spying budget reveals investments in 'groundbreaking' cryptanalysis


The U.S. intelligence community is reportedly using a fifth of its US$52.6 billion annual budget to fund cryptography-related programs and operations.

Some of those funds are invested in finding weaknesses in cryptographic systems that would allow breaking encrypted communications collected from the Internet and elsewhere, according to a portion of a top-secret document published Thursday by The Washington Post and obtained from former National Security Agency contractor Edward Snowden.

The document is the fiscal year 2013 budget proposal summary for the National Intelligence Program, which spans 16 agencies with over 107,000 employees. The entire report, called "FY 2013 Congressional Budget Justification," runs to 178 pages, according to the Post, but the newspaper published only 17 of them, including a five-page statement signed by U.S. Director of National Intelligence James Clapper.

In his statement, Clapper listed the primary areas of investment for the intelligence community, which included Signals Intelligence (SIGINT). With respect to SIGINT, he wrote: "We are bolstering our support for clandestine SIGINT capabilities to collect against high priority targets, including foreign leadership targets. Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic."

Cryptanalysis is the science of analyzing cryptographic systems in order to find weaknesses that would allow obtaining the contents of encrypted messages without advance knowledge of the encryption key.

Previous documents leaked by Snowden revealed that the NSA is collecting Internet communications en masse with the help of telecommunication and technology companies. U.S. companies that operate the backbone telecommunications and Internet infrastructure are paid millions of dollars every year by the government to allow the NSA to collect data as it moves through their fiber-optic cables and networks, the Post reported Thursday.

The newly leaked budget reveals that this money is paid through a project called "Corporate Partner Access," which was expected to cost $278 million during fiscal year 2013, the newspaper said. There are also payments for "Foreign Partner Access" totalling $56.6 million, although it's not clear whether these go to foreign companies, foreign governments or other entities.

The NSA's mass upstream interception of Internet traffic has prompted many people in the security community to wonder about the agency's crypto-cracking capabilities against the encryption schemes and protocols in widespread use on the Internet today. Some crypto experts believe there is no reason to think the NSA can crack strong encryption algorithms vetted by scientists, but others said that the feasibility of breaking widely used encryption protocols like SSL/TLS depends on various factors, such as key size and other configuration choices.

While the leaked budget document does not provide details about the NSA's ability to crack encrypted communication, it does confirm that cryptography and cryptanalysis are among the U.S. intelligence community's key areas of interest.

Twenty-one percent, or roughly $11 billion, of the 2013 budget was intended for the Consolidated Cryptologic Program (CCP), which includes NSA programs and is staffed by around 35,000 employees. This makes it the second most expensive program of the intelligence community after the Central Intelligence Agency program, which was supposed to receive 28 percent of the funds.

Of the $11 billion used to fund the CCP, around $2.5 billion, or 23 percent, was intended for "collection and operations" and $1.6 billion, or 15 percent, for "processing and exploitation." The program's biggest expenses were estimated in the "enterprise management and support" category, which was set to receive 26 percent of the funds.
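
For readers checking the arithmetic, the quoted figures fit together roughly as follows (rounded to one decimal place, using the percentages given above):

```python
# Rough reconstruction of the budget figures quoted in this article.
total = 52.6e9               # FY2013 National Intelligence Program budget
ccp = 0.21 * total           # Consolidated Cryptologic Program share (~$11.0B)
collection = 0.23 * 11e9     # collection and operations (~$2.5B)
processing = 0.15 * 11e9     # processing and exploitation (~$1.6B)
for name, value in [("CCP", ccp), ("Collection", collection), ("Processing", processing)]:
    print(f"{name}: ${value / 1e9:.1f} billion")
```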


Big investor gets option to join Microsoft board


A week after Steve Ballmer said he plans to step down as CEO of Microsoft, ValueAct Capital, one of its biggest investors, has secured the right to appoint its president to Microsoft's board.

Microsoft has signed a "cooperation agreement" with ValueAct that allows its president, Mason Morfit, to meet regularly with Microsoft board members to discuss "a range of significant business issues," Microsoft said in a statement Friday.

The agreement gives ValueAct the option of having Morfit join Microsoft's board, beginning at the first quarterly board meeting after Microsoft's next annual shareholder meeting.

ValueAct Capital, in San Francisco, holds about 0.8 percent of Microsoft's outstanding stock and is one of its largest shareholders, Microsoft said. The firm manages about $12 billion in assets.

The development, announced Friday afternoon ahead of the Labor Day weekend in the U.S., comes a week after Ballmer's surprise announcement that he will step down as CEO of Microsoft at some point in the next year.

Longtime Microsoft analyst Rick Sherlund has said that Microsoft was under pressure from ValueAct to increase value for shareholders.

"It's not clear how big a role ValueAct played here, but I suspect they were a strong catalyst for change," Sherlund told the Seattle Times shortly after Ballmer announced his plan to retire.

Ballmer responded that his retirement had nothing to do with ValueAct. "My retirement has everything to do with what I think is the right long-term timing for Microsoft," he told the Times.

In the statement Friday, Morfit said he looked forward to "actively working" with Microsoft at "this critical inflection point in the company's evolution." Ballmer was quoted as saying Microsoft "looks forward to ValueAct Capital's input."

Microsoft didn't immediately return a call for additional comment.

Will software-defined networking kill network engineers' beloved CLI?


SDN (software-defined networking) promises some real benefits for people who use networks, but to the engineers who manage them, it may represent the end of an era.

Ever since Cisco made its first routers in the 1980s, most network engineers have relied on a CLI (command-line interface) to configure, manage and troubleshoot everything from small-office LANs to wide-area carrier networks. Cisco's isn't the only CLI, but on the strength of the company's domination of networking, it has become a de facto standard in the industry, closely emulated by other vendors.

As such, it's been a ticket to career advancement for countless network experts, especially those certified as CCNAs (Cisco Certified Network Associates). Those network management experts, along with higher level CCIEs (Cisco Certified Internetwork Experts) and holders of other official Cisco credentials, make up a trained workforce of more than 2 million, according to the company.


 A CLI is simply a way to interact with software by typing in lines of commands, as PC users did in the days of DOS. With the Cisco CLI and those that followed in its footsteps, engineers typically set up and manage networks by issuing commands to individual pieces of gear, such as routers and switches.

SDN, and the broader trend of network automation, uses a higher layer of software to control networks in a more abstract way. Whether through OpenFlow, Cisco's ONE (Open Network Environment) architecture, or other frameworks, the new systems separate the so-called control plane of the network from the forwarding plane, which is made up of the equipment that pushes packets. Engineers managing the network interact with applications, not ports.
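
To make the contrast concrete, here is a minimal sketch of the programmatic style: instead of typing commands into each switch, a script describes a forwarding rule as data and posts it to an SDN controller's northbound REST API. The controller URL, endpoint and field names here are hypothetical; real controllers, OpenFlow-based or otherwise, each define their own interfaces.

```python
import json
from urllib.request import Request, urlopen

CONTROLLER = "http://sdn-controller.example.com:8080"  # hypothetical controller

# Describe the desired behaviour as data rather than per-device CLI commands.
flow_rule = {
    "switch": "00:00:00:00:00:00:00:01",
    "match": {"eth_type": "ipv4", "ipv4_dst": "10.0.0.10"},
    "actions": [{"type": "output", "port": 3}],
    "priority": 100,
}

request = Request(f"{CONTROLLER}/flows",  # hypothetical endpoint
                  data=json.dumps(flow_rule).encode(),
                  headers={"Content-Type": "application/json"},
                  method="POST")
with urlopen(request) as reply:
    print(reply.status)
```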

"The network used to be programmed through what we call CLIs, or command-line interfaces. We're now changing that to create programmatic interfaces," Cisco Chief Strategy Officer Padmasree Warrior said at a press event earlier this year.

Will SDN spell doom for the tool that network engineers have used throughout their careers?

"If done properly, yes, it should kill the CLI. Which scares the living daylights out of the vast majority of CCIEs," Gartner analyst Joe Skorupa said. "Certainly all of those who define their worth in their job as around the fact that they understand the most obscure Cisco CLI commands for configuring some corner-case BGP4 (Border Gateway Protocol 4) parameter."

At some of the enterprises that Gartner talks to, the backlash from some network engineers has already begun, according to Skorupa.

"We're already seeing that group of CCIEs doing everything they can to try and prevent SDN from being deployed in their companies," Skorupa said. Some companies have deliberately left such employees out of their evaluations of SDN, he said.

Not everyone thinks the CLI's days are numbered. SDN doesn't go deep enough to analyze and fix every flaw in a network, said Alan Mimms, a senior architect at F5 Networks.

"It's not obsolete by any definition," Mimms said. He compared SDN to driving a car and CLI to getting under the hood and working on it. For example, for any given set of ACLs (access control lists) there are almost always problems for some applications that surface only after the ACLs have been configured and used, he said. A network engineer will still have to use CLI to diagnose and solve those problems.

However, SDN will cut into the use of CLI for more routine tasks, Mimms said. Network engineers who know only CLI will end up like manual laborers whose jobs are replaced by automation. It's likely that some network jobs will be eliminated, he said.

This isn't the first time an alternative has risen up to challenge the CLI, said Walter Miron, a director of technology strategy at Canadian service provider Telus. There have been graphical user interfaces to manage networks for years, he said, though they haven't always had a warm welcome. "Engineers will always gravitate toward a CLI when it's available," Miron said.

Even networking startups need to offer a Cisco CLI so their customers' engineers will know how to manage their products, said Carl Moberg, vice president of technology at Tail-F Systems. Since 2005, Tail-F has been one of the companies going up against the prevailing order.

It started by introducing ConfD, a graphical tool for configuring network devices, which Cisco and other major vendors included with their gear, according to Moberg. Later the company added NCS (Network Control System), a software platform for managing the network as a whole. To maintain interoperability, NCS has interfaces to Cisco's CLI and other vendors' management systems.

CLIs have their roots in the very foundations of the Internet, according to Moberg. The approach of the Internet Engineering Task Force, which oversees IP (Internet Protocol), has always been to find pragmatic solutions to defined problems, he said. This detail-oriented, "bottom up" orientation was different from the way cellular networks were designed. The 3GPP, which developed the GSM standard used by most cell carriers, crafted its entire architecture at once, he said.

The IETF's approach lent itself to manual, device-by-device administration, Moberg said. But as networks got more complex, that technique ran into limitations. Changes to networks are now more frequent and complex, so there's more room for human error and the cost of mistakes is higher, he said.

"Even the most hardcore Cisco engineers are sick and tired of typing the same commands over and over again and failing every 50th time," Moberg said. Though the CLI will live on, it will become a specialist tool for debugging in extreme situations, he said.

"There'll always be some level of CLI," said Bill Hanna, vice president of technical services at University of Pittsburgh Medical Center. At the launch earlier this year of Nuage Networks' SDN system, called Virtualized Services Platform, Hanna said he hoped SDN would replace the CLI. The number of lines of code involved in a system like VSP is "scary," he said.

On a network fabric with 100,000 ports, it would take all day just to scroll through a list of the ports, said Vijay Gill, a general manager at Microsoft, on a panel discussion at the GigaOm Structure conference earlier this year.

"The scale of systems is becoming so large that you can't actually do anything by hand," Gill said. Instead, administrators now have to operate on software code that then expands out to give commands to those ports, he said.

Faced with these changes, most network administrators will fall into three groups, Gartner's Skorupa said.

The first group will "get it" and welcome not having to troubleshoot routers in the middle of the night. They would rather work with other IT and business managers to address broader enterprise issues, Skorupa said. The second group won't be ready at first but will advance their skills and eventually find a place in the new landscape.

The third group will never get it, Skorupa said. They'll face the same fate as telecommunications administrators who relied for their jobs on knowing obscure commands on TDM (time-division multiplexing) phone systems, he said. Those engineers got cut out when circuit-switched voice shifted over to VoIP (voice over Internet Protocol) and went onto the LAN.

"All of that knowledge that you had amassed over decades of employment got written to zero," Skorupa said. For IP network engineers who resist change, there will be a cruel irony: "SDN will do to them what they did to the guys who managed the old TDM voice systems."

But SDN won't spell job losses, at least not for those CLI jockeys who are willing to broaden their horizons, said analyst Zeus Kerravala of ZK Research.

"The role of the network engineer, I don't think, has ever been more important," Kerravala said. "Cloud computing and mobile computing are network-centric compute models."

Data centers may require just as many people, but with virtualization, the sharply defined roles of network, server and storage engineer are blurring, he said. Each will have to understand the increasingly interdependent parts.

The first step in keeping ahead of the curve, observers say, may be to learn programming.

"The people who used to use CLI will have to learn scripting and maybe higher-level languages to program the network, or at least to optimize the network," said Pascale Vicat-Blanc, founder and CEO of application-defined networking startup Lyatiss, during the Structure panel.

Microsoft's Gill suggested network engineers learn languages such as Python, C# and PowerShell.
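
As a simple example of the kind of scripting being described, the sketch below generates a repetitive port configuration change from one template instead of typing it device by device; the naming scheme and data layout are placeholders, and in practice engineers would push the result through a vendor SDK, NETCONF or a tool such as Ansible.

```python
# Build one configuration payload per port instead of hand-typing CLI commands.
def port_config(port: int, vlan: int) -> dict:
    return {"interface": f"Ethernet1/{port}", "mode": "access", "vlan": vlan}

def configure_switch(vlan: int, port_count: int) -> list:
    """Return the full change set for one switch; pushing it is left to an API client."""
    return [port_config(p, vlan) for p in range(1, port_count + 1)]

# Roughly 100,000 ports across 2,000 switches becomes a loop, not an all-day session.
change_set = {f"switch-{n:04d}": configure_switch(vlan=42, port_count=48)
              for n in range(1, 2001)}
print(sum(len(ports) for ports in change_set.values()), "port configurations generated")
```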

For Facebook, which takes a more hands-on approach to its infrastructure than do most enterprises, that future is now.

"If you look at the Facebook network engineering team, pretty much everybody's writing code as well," said Najam Ahmad, Facebook's director of technical operations for infrastructure.

Network engineers historically have used CLIs because that's all they were given, Ahmad said. "I think we're underestimating their ability."

Cisco is now gearing up to help its certified workforce meet the newly emerging requirements, said Tejas Vashi, director of product management for Learning@Cisco, which oversees education, testing and certification of Cisco engineers.

With software automation, the CLI won't go away, but many network functions will be carried out through applications rather than manual configuration, Vashi said. As a result, network designers, network engineers and support engineers all will see their jobs change, and there will be a new role added to the mix, he said.

In the new world, network designers will determine network requirements and how to fulfill them, then use that knowledge to define the specifications for network applications. Writing those applications will fall to a new type of network staffer, which Learning@Cisco calls the software automation developer. These developers will have background knowledge about networking along with skills in common programming languages such as Java, Python, and C, said product manager Antonella Como. After the software is written, network engineers and support engineers will install and troubleshoot it.

"All these people need to somewhat evolve their skills," Vashi said. Cisco plans to introduce a new certification involving software automation, but it hasn't announced when.

Despite the changes brewing in networks and jobs, the larger lessons of all those years typing in commands will still pay off for those who can evolve beyond the CLI, Vashi and others said.

"You've got to understand the fundamentals," Vashi said. "If you don't know how the network infrastructure works, you could have all the background in software automation, and you don't know what you're doing on the network side."

Report: The NSA pays millions for US telecom access


When it comes to tapping into U.S. telecommunications networks for surreptitious surveillance, the U.S. National Security Agency can't be accused of not paying its way.

The government agency pays "hundreds of millions of dollars a year" to U.S. telecommunications companies for the equipment and service required to intercept telephone calls, emails and instant messages of potential interest, according to a story in Thursday's Washington Post.

For the current fiscal year, the NSA will pay US$278 million for such access, and had paid $394 million in fiscal 2011, according to the Post.

Although previous news reports of NSA surveillance noted that the agency paid the costs for tapping into communications networks, the exact amount the agency has paid has not been cited before, according to the Post.

One of the largest of the 16 U.S. intelligence offices, the NSA is in charge of collecting and analyzing data to track foreign activities that could be harmful to the U.S. The agency is part of the U.S. Department of Defense and is overseen by the Director of National Intelligence.

The practice dates back at least to the 1970s. These data collection programs -- which have gone under names such as Blarney, Stormbrew, Fairview, and Oakstar -- are separate from the PRISM program first publicly unveiled by former NSA contractor Edward Snowden. PRISM collects data from U.S. service providers such as Microsoft, Facebook and Google, whereas with these programs, the NSA collects potential data of interest as it moves across telecommunication gateways.

The article did not provide the names of any telecommunications companies that participate in the program, though it notes they typically are paid for the costs of hardware and the labor to install and run the necessary equipment, as well as a certain percentage for profit.

The privacy advocacy group Electronic Privacy Information Center has noted that it is troubling that the NSA is paying so much to telecommunications companies, given that their customers expect their communications to remain private.

Organisations lack visibility on malware attacks, survey confirms


Many organisations affected by malware in the last year either had no idea how it had bypassed their security or simply suspected their expensively assembled antivirus defences had failed to detect it, a survey by reputation vendor Bit9 has found.

Reading between the lines of the firm’s 2013 Cyber Security Survey, an unexpected fatalism starts to emerge from the numbers.

It was not a huge surprise that seven out of ten of the 250 US and UK IT managers who responded identified the PC (i.e. not tablets or smartphones) as the soft underbelly. That much has been known for some time; security staff understand that Windows is seen by cybercriminals as the most easily prised door into any organisation.

What was more disquieting was that of the 47 percent that had experienced at least one cyberattack, a surprising number seemed unable to work out how malware might have been used in such events. Forty percent believed it had bypassed antivirus, 27 percent that it bypassed network-level security, 25 percent that it had arrived on a USB device, 17 percent while a mobile device (i.e. a laptop) was travelling, while 31 percent admitted they had no idea.

Just over half rated their organisation’s ability to detect suspicious activity before damage was done as being either average, deficient, or in 2 percent of cases, “non-existent.” The problem is visibility. Only forty-two percent of respondents believed their organisation’s ability to monitor files in real-time was good or excellent.

Similarly, many admitted they might struggle to work out which endpoints had been affected in an outbreak, whether in real time or when conducting a retrospective forensic investigation.

“The 2013 Cyber Security Survey shows proof that traditional, signature-based security defences cannot keep up with today’s advanced threats and malware,” said Bit9 CSO, Nick Levay.

“These statistics are in line with what we hear from our customers: security teams have limited to no visibility into what is happening on their endpoints and servers. If malware is suspected, there is no way of knowing which machine it’s running on, if it executed or what it is doing,” he said.

“There are often no historical details to determine when a threat arrived and executed, leading to slow remediation.”

According to Levay, the most astonishing statistic was that 13 percent of those surveyed didn’t even know whether they had experienced a cyberattack or not. Many IT departments were simply struggling to defend themselves using a first-generation security model based on antivirus.

Bit9’s answer is whitelisting technology. It would be an omission in a story on the firm not to mention that it had its own security embarrassment earlier this year, when an attacker was able to compromise one of its digital certificates to install malware at three customers.

“The fact that this happened - even to us - shows that the threat from malicious actors is very real, extremely sophisticated, and that all of us must be vigilant.  We are confident that the steps we have taken will address this incident while preventing a similar issue from occurring again,” Bit9 said at the time.

Researchers create world's smallest, open source drone


Academics in the Netherlands claim they have designed and built the world’s smallest autopilot system for drones.

At four square centimetres, the open source Lisa/S chip (Lost Illusions Serendipitous Autopilot/Small) that the autopilot system runs off is roughly the same size as a €1 coin. Despite its small size, the 1.9g piece of silicon contains everything required to fly a micro aerial vehicle (MAV) without human interaction.

Project leader Bart Remes told Techworld that the biggest challenge with Lisa/S, which is 30 grams lighter than its predecessor, was getting everything to fit onto a 2x2 cm board.


  “The overall strategy of our MAV lab is to make everything small, light and electrically efficient,” said Remes. “If the autopilot is smaller and more efficient you can fly longer or carry more payload.”

The chip's software is based on Paparazzi, a free open source drone autopilot system that's existed since 2003 and is available to everyone.

Remes said he chose to make Lisa/S open source because he wants MAVs to become as popular as mobile phones. “The best way to achieve this is making it available for the public so they can come up with the killer application,” he said in reference to the decision.

“Now the community can test it in much more test environments than we can do here in the lab,” said Remes. “They let us know when an issue arises and help us to solve it.  Thanks to the community the open source autopilots are safer than closed source autopilots.”

Drones have traditionally been largely confined to the military but the team hopes that civil drone applications will become more common and used in everything from agriculture to search and rescue.

“More and more farmers are using drones for monitoring their crops,” said Remes. “It is a way to save money because they only spray fertiliser on the crops that need it, by looking at the data coming from the drone's multi spectrum camera.”

The Lisa/S will compete in the International MAV competition next month.

Monday 2 September 2013

Samsung starts mass production of DDR4 memories


Samsung Electronics has started mass producing DDR4 memories that it expects will go into enterprise servers in next-generation data centers.

A successor to DDR3 (Double Data Rate 3), DDR4 memories are expected to offer higher performance, better reliability and lower power consumption than their predecessor.

However, there have been some doubts as to whether the market is ready to transition in volume from DDR3 memories, which are still being designed into servers and other products. Some analysts have forecast that DDR4 will not be designed into servers, and later PCs, until 2015.

Samsung said on Thursday that early market availability of the 4-gigabit (Gb) DDR4 devices, which use 20-nanometer process technology, will create demand for 16GB and 32GB memory modules.
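
The module capacities follow from the device density. As a back-of-the-envelope check (ignoring rank organisation and die stacking), a 4Gb part holds 512MB, so a 16GB module needs 32 such devices and a 32GB module 64:

```python
device_gb = 4 / 8  # a 4-gigabit DDR4 device stores 0.5 gigabytes
for module_gb in (16, 32):
    print(f"{module_gb}GB module ~ {int(module_gb / device_gb)} x 4Gb devices")
```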


Samsung did not immediately provide information on the schedule for shipment of the new memories. The pricing information is not available, a spokesman said.

Microelectronics standards body JEDEC Solid State Technology Association published the initial DDR4 standard in September 2012.

Salesforce.com mobile app developers gain security tools


Good Technology has integrated its Dynamics Secure Mobility Platform with Salesforce.com's Mobile SDK to help developers build mobile applications that are more secure and easily managed.

The growing popularity of smartphones and tablets combined with the BYOD (bring-your-own-device) trend presents several challenges to IT departments, including developing mobile applications and then efficiently managing and protecting them. Salesforce.com's Mobile SDK (software development kit) helps with the former and Good's Secure Mobility Platform offers the latter.

The goal with the integration is to make it easier for Salesforce.com developers to build apps compatible with Good's containerization technology, which offers features such as app-level encryption as well as compliance and jailbreak detection. Enterprises can also put in place data loss prevention and automated actions that lock and wipe applications without impacting a user's device or personal data, according to Good.

The Mobile SDK, which is available for Android and iOS, is a key part of Salesforce.com's accelerating mobile push. It lets developers choose between building applications directly for Apple's and Google's OSes, web applications, or so-called hybrid applications -- which make it possible to embed HTML5 apps inside a native container.

Recently, Salesforce.com announced version 2.0 of the SDK, which added the SmartSync data framework, allowing developers to create applications that work with data both offline and online.

Good isn't the only mobile management tool vendor that's working with Salesforce. On Tuesday, competitor MobileIron announced Anyware. The hosted enterprise mobility management service lets administrators distribute mobile apps to employees as well as manage their devices from the Salesforce administration console, MobileIron said.

Australian who boasted of hacking to plead not guilty to charges stemming from raid


A 17-year-old Australian who in February claimed to have breached networks at Microsoft and Sony will plead not guilty to charges stemming from a police raid on his home.

Interestingly, none of the charges lodged against Dylan Wheeler relate to his claims to have breached the networks and extracted software tools used to develop games for the Xbox One and PlayStation systems.

According to documents shared by Wheeler, he is charged with possession of child exploitation material, dishonestly obtaining credit card information, possession of identification information with the intent of committing an offense, and disobeying a data access order to reveal his passwords.

Wheeler said Friday he also faces a weapons charge related to a stun gun that police seized from his family's home, and possession of drug paraphernalia.

He told Perth Children's Court on Friday how he intends to plead, and said he will formally plead not guilty in a hearing scheduled for Nov. 11.

Wheeler maintains he is innocent, and believes the charges are in part retribution.

The police "were pissed off at the fact that I went to the media," Wheeler said.

Western Australian Police in Perth declined to comment on the case.

Eventually, Wheeler said, he expects hacking charges to be filed against him. He has been open about his probes into Microsoft's and Sony's networks, and said he told Microsoft about weaknesses in its network.

"To my knowledge they [Microsoft] fixed up a lot of the problems they had," he said, while Sony "did try and fix the issues."

But Wheeler did provoke Microsoft. In August 2012, he posted an eBay listing for a "Microsoft Xbox Durango Development Kit." That same month he was visited by an investigator with Microsoft's IP Crimes Team.

In February, he placed another eBay auction listing for a "Durango" PC. The listing expired on Feb. 19, the same day police raided his family's home, seizing three Apple computers, a 1TB hard drive, credit cards, his mobile phone and a stun gun, among other items.

Wheeler's lawyer, Marc Saupin, said Friday that Australian legal rules prevent counsel from commenting about an ongoing case.