Aug 16, 2017

$550 dock turns a smartphone into a medical lab

Retrofitting medical technology onto smartphones isn't anything new. We've already seen innovation in HIV testing and fertility tracking, for example. But researchers say the TRI Analyzer boasts a wider spectrum of applications, and the relatively cheap, portable nature of the kit means it could have uses beyond health diagnostics, in sectors such as animal health, food safety and environmental monitoring.

"Our TRI Analyzer is like the Swiss Army knife of biosensing," said Professor Brian Cunningham. "It's capable of performing the three most common types of tests in medical diagnostics, so in practice, thousands of already-developed tests could be adapted to it."

from The Unofficial Apple Weblog (TUAW)

Aug 10, 2017

AssemblyAI wants to put customized speech recognition within reach of any developer

It’s clear that voice is becoming a major interface, as we witness the rise of the Amazon Echo, Google Home, Siri, Cortana and their ilk. We’re also seeing an increasing use of chat bots and other voice-driven tools, which often require speech recognition with a very specific vocabulary.

That’s where AssemblyAI, a member of the Summer ’17 Y Combinator class, comes in. The startup is building an API that will help developers build customized chat interfaces quickly.

“We’re building an API for customized speech recognition. Developers use our API for transcribing phone calls or creating custom voice interfaces. We help them recognize an unlimited number of custom words without any training,” Dylan Fox, AssemblyAI’s founder told TechCrunch.

He says most off-the-shelf speech recognition APIs are designed to be one-size-fits-all, and customizing them gets really expensive. AssemblyAI hopes to change that.

When Fox was working at his previous job as an engineer at Cisco, he saw first-hand how difficult it was to create a speech recognition program with custom words. It usually involved a lot of engineering resources and took a long time. He came up with the idea of AssemblyAI as a way to make the process easier, less costly and much faster. He added that recent advances in AI and machine learning have made it possible to do what his company is doing now.

It’s worth noting that the tool runs on GPUs, rather than CPUs, because the task is so resource-intensive. Getting access to a sufficient number of GPUs to build and run the models has been a challenge for the three-person startup, but its affiliation with Y Combinator has helped in that regard. The technology is also brand new, so the team has to solve every problem it encounters on its own; there are no books to read or solutions to look up on Google.

Even though they are just three people, they believe user experience is going to be key to their success, so they have one team member fully devoted to developing the front end. They claim that no training is required to run the API. You just upload a list of terms or names and the API takes care of the rest.
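The upload-a-term-list workflow is simple to picture. The sketch below is purely illustrative: the function, field names and URL are assumptions made for this article, not AssemblyAI's documented API.

```python
import json

# Illustrative only: these field names are assumptions, not AssemblyAI's real API.
def build_transcription_request(audio_url, custom_words):
    """Assemble a JSON body pairing an audio file with a custom vocabulary."""
    return json.dumps({
        "audio_url": audio_url,
        # Deduplicate and sort the uploaded term list; no model training involved.
        "custom_vocabulary": sorted(set(custom_words)),
    })

# A developer would POST this body to a (hypothetical) transcription endpoint.
body = build_transcription_request(
    "https://example.com/support-call.wav",
    ["Cisco", "AssemblyAI", "Cisco"],
)
```

The point of the sketch is the shape of the workflow: a plain list of custom terms travels with the request, and the service handles recognition without a separate training step.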

Fox fully recognizes that it’s hard for a startup to build a speech recognition tool without constantly worrying about the bigger companies swooping in and grabbing their market share, but he says his company is working hard to differentiate itself as a go-to tool for developers.

“As a smaller company focused on a speech recognition technology, we can provide a better experience [than the bigger companies].” He says that means paying attention to the little things that attract developers to a tool, like better documentation, simpler integration and overall ease of use.

So far the product is in private beta, with several companies deploying it on GPUs in the cloud, but it’s early days. Fox says that when more customers come, the company will have to scale to meet demand using additional cloud-based GPU resources. If the product works as described, that shouldn’t be long now.

Featured Image: Bryce Durbin/TechCrunch

from TechCrunch

Jul 27, 2017

Google launches its own AI Studio to foster machine intelligence startups

A new week brings a fresh Google initiative targeting AI startups. We started the month with the announcement of Gradient Ventures, Google’s on-balance sheet AI investment vehicle. Two days later we watched the finalists of Google Cloud’s machine learning competition pitch to a panel of top AI investors. And today, Google’s Launchpad is announcing a new hands-on Studio program to feed hungry AI startups the resources they need to get off the ground and scale.

The thesis is simple — not all startups are created the same. AI startups love data and struggle to get enough of it. They often have to go to market in phases, iterating as new data becomes available. And they typically have highly technical teams and a dearth of product talent. You get the picture.

The Launchpad Studio aims to address these needs head-on with specialized data sets, simulation tools and prototyping assistance. Another selling point is that accepted startups will have access to Google talent, including engineers, IP experts and product specialists.

“Launchpad, to date, operates in 40 countries around the world,” explains Roy Geva Glasberg, Google’s Global Lead for Accelerator efforts. “We have worked with over 10,000 startups and trained over 2,000 mentors globally.”

This core mentor base will serve as a recruiting pool for mentors that will assist the Studio. Barak Hachamov, board member for Launchpad, has been traveling around the world with Glasberg to identify new mentors for the program.

The idea of a startup studio isn’t new. It has been attempted a handful of times in recent years, but seems to have finally caught on with Andy Rubin’s Playground Global. Playground offers startups extensive services and access to top talent to dial-in products and compete with the largest of tech companies.

On the AI Studio front, Yoshua Bengio’s Element AI raised a $102 million Series A to create a similar program. Bengio, one of the most famous AI researchers (if not the most famous), can help attract top machine learning talent, enabling recruiting parity with top AI groups like Google’s DeepMind and Facebook’s FAIR. Launchpad Studio won’t have Bengio, but it will bring Peter Norvig, Dan Ariely, Yossi Matias and Chris DiBona to the table.

But unlike Playground’s $300 million accompanying venture capital arm and Element’s own coffers, Launchpad Studio doesn’t actually have any capital to deploy. On one hand, capital completes the package. On the other, I’ve never heard a good AI startup complain about not being able to raise funding.

Launchpad Studio sits on top of the Google Developer Launchpad network, which has been operating an accelerator at global scale for some time. Now on its fourth class of startups, the team has had time to flesh out its vision and build relationships with experts within Google to ease startup woes.

“Launchpad has positioned itself as the Google global program for startups,” asserts Glasberg. “It is the most scaleable tool Google has today to reach, empower, train and support startups globally.”

With all the resources in the world, Google’s biggest challenge with its Studio won’t be vision or execution — but this doesn’t guarantee everything will be smooth sailing. Between GV, CapitalG, Gradient Ventures, GCP and Studio, entrepreneurs are going to have a lot of potential touch-points with the company.

On paper, Launchpad Studio is the Switzerland of Google’s programs. It doesn’t aim to make money or strengthen Google Cloud’s positioning. But from the perspective of founders, there’s bound to be some confusion. In an ideal world we will see a meeting of the minds between Launchpad’s Glasberg, Gradient’s Anna Patterson and GCP’s Sam O’Keefe.

The Launchpad Studio will be based in San Francisco, with additional operations in Tel Aviv and New York City. Eventually Toronto, London, Bangalore and Singapore will host events locally for AI founders.

Applications to the Studio are now open — if you’re interested you can apply here. The program itself is stage-agnostic, so there are no restrictions on size. Ideally early- and later-stage startups can learn from each other as they scale machine learning models to larger audiences.

Featured Image: Bryce Durbin

from TechCrunch

Jul 16, 2017

The First Trailer for Ava DuVernay's A Wrinkle in Time Is Here, and It's Fantastic

The first trailer for Ava DuVernay’s adaptation of Madeleine L’Engle’s beloved novel A Wrinkle in Time is finally here, and it’s everything we hoped it would be.

Young actress Storm Reid stars as Meg Murry, with an all-star cast of adult actors backing her up, including the cosmic trio of Oprah Winfrey as Mrs. Which, Reese Witherspoon as Mrs. Whatsit, and Mindy Kaling as Mrs. Who. Gugu Mbatha-Raw and Chris Pine play Meg’s parents, Dr. Kate Murry and the mysteriously missing Dr. Alex Murry. Newcomer Deric McCabe plays Meg’s gifted little brother, Charles Wallace.

We’re so excited for this movie, we’re gonna overlook the fact that the trailer contains yet another use of an edgy remake of a familiar pop song as its background music. A Wrinkle in Time will be out March 9, 2018.

from Gizmodo: Top

Jul 15, 2017

Google’s life sciences unit is releasing 20 million bacteria-infected mosquitoes in Fresno

Verily, the life sciences arm of Google’s parent company Alphabet, has hatched a plan to release about 20 million lab-made, bacteria-infected mosquitoes upon Fresno, California — and that’s a good thing!

You see, the Zika-carrying Aedes aegypti mosquito is prevalent in the area. Earlier this year, a woman contracted Fresno's first confirmed case of Zika through sexual contact with a partner who had been traveling. Now the fear is that a local mosquito biting an infected patient is all but inevitable if nothing is done. Verily’s plan, called the Debug Project, aims to wipe out this potential Zika-carrying mosquito population to prevent further infections.

Could messing with the mosquito population have unforeseen, disastrous consequences? Not likely: this particular mosquito species only entered the area in 2013, so the local ecosystem got along without it until very recently.

So what’s the plan to get rid of them? Verily’s male mosquitoes are infected with the Wolbachia bacteria, which is harmless to humans; but when the infected males mate with wild females, the eggs those females produce cannot yield offspring.

Bonus: male mosquitoes don’t bite, so Fresno residents won’t have to worry about itching any more than they usually would.

No word from the company on how much something like this will cost, but Linus Upson, an engineer on the team releasing the mosquitoes, told MIT Technology Review the company planned to do something similar in Australia next.

“We want to show this can work in different kinds of environments,” he told the magazine.

Verily plans to release about 1 million mosquitoes a week over a 20-week period in two 300-acre neighborhoods in the Fresno area — the largest U.S. release to date of mosquitoes infected with the Wolbachia bacteria.

Those in the Fancher Creek neighborhood may notice a Verily van releasing a healthy swarm of the little bugs throughout its streets starting today.

Featured Image: Department of Foreign Affairs/Flickr UNDER A CC BY 2.0 LICENSE

from TechCrunch

Jul 12, 2017

Google's new AI acquisition aims to fix developing world problems

As part of its continued push into the AI sector, Google has just revealed that it has purchased a new deep learning startup. The Indian-based Halli Labs are the latest addition to Google's Next Billion Users team, joining the world-leading tech comp...

from The Unofficial Apple Weblog (TUAW)

Jul 5, 2017

Einride's self-driving truck looks like a giant freezer on wheels

The truck uses a hybrid driverless system. While on highways, the T-pod drives itself, but on main roads, a human will remotely manage the driving system. People will also monitor T-pods as they drive on highways in case a situation arises that necessitates human control. Einride is currently working on charging stations for the trucks.

Einride isn't the only company working on driverless shipping trucks. Waymo, Uber and Daimler are among the companies also developing similar vehicles. For shipping at larger scales, self-navigating and remote-controlled ships as well as massive drones are also in the works.

The T-pod prototype isn't quite fully developed yet, but Einride expects to have its first completed truck available to customers in the fall. By 2020, the company plans to have a fleet of 200 goofy-looking trucks traveling between the Swedish cities of Gothenburg and Helsingborg, carrying an expected two million pallets per year.

from Engadget

Jun 28, 2017

Google unveils Advr, an experimental Area 120 project for advertising in VR

Google today is more formally taking the wraps off its internal incubator, Area 120, with the launch of a dedicated website, alongside the launch of one of the program’s more interesting projects to date: a way to advertise within VR. The new experiment, which is simply called Advr, involves a cube-like ad format which allows video ads to run in a 3D/VR environment.

Area 120 was launched at Google in March 2016 as a way to retain entrepreneurial-minded talent at the company, as well as to give teams the ability to test new ideas that could eventually become Google products or be integrated with existing ones.

That hasn’t happened yet, as the R&D program is still fairly new.

Much is already known about Area 120, whose name references a famous aspect of Google culture – allowing employees to work on passion projects on the side, using 20 percent of their time. That was often more an idea than a policy, though.

Area 120, however, refocuses the concept into a more structured, formal program.

Google isn’t the only major tech giant to run its own internal incubator these days. Microsoft has its Garage program, while Apple tried something similar in 2012 with Blue Sky.

Google’s Area 120 works a lot like a startup accelerator inside the company. Employees apply to the program during a set period, then Google selects a handful of teams to join. Each “class,” so to speak, has roughly 15 teams who work to prove out their ideas over the course of the next six months. During this time, the employees no longer work their day jobs – they exclusively focus on their Area 120 projects instead.

If projects are successful, Googlers will be allowed to continue to work on them; if not, they’re invited to return to work at Google, in a different role.

Two classes have already been invited into Area 120 since its launch, and Google is taking applications for its third now.

The program is so new that even Google itself doesn’t yet know what the fate of its Area 120 projects will be, or even, more broadly, if the program is worth the investment. It’s possible that some might actually become standalone Google products one day, or be merged with existing ones. Many more will likely be closed, after failing to find traction – as is common with startups, too.

Not all Area 120 projects are public-facing. Some are used internally, or only with select testers on an invite-only basis. Only a few have seen any media coverage. Of those available publicly, Uptime is the most promising. The app, which officially launched this month as the invite requirement was dropped, allows for a YouTube co-watching experience among friends.

Other Area 120 projects include personal stylist Tailor, learn-to-code app Grasshopper and emoji messenger Supersonic (which is closing down). There are also some projects that aren’t broadly available, like a job-matching service in Bangalore and a yet-to-launch booking tool called Appointments. Google hasn’t promoted any of the projects until now, however.

But it’s giving Advr a push, with an announcement on the Google Developer blog today.

Advr: Video Advertising in VR

The new project is an experiment focused on figuring out if video ads could work in VR, and if so, how they would function.

The team has developed a plug-in for Unity that can show ads in VR environments. According to the post, developers aren’t interested in disruptive or hard-to-implement ad experiences in VR, which is how the Advr team arrived at the idea of a simple cube.

VR users can optionally engage with the cube either by tapping on it or gazing at it for a few seconds, which then pops open a video player to display the ad. The user can choose to watch the ad or close the player at this point.
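The tap-or-gaze trigger described above is a common VR interaction pattern. Here is a minimal, hypothetical sketch of that logic; the class, the two-second dwell threshold and the per-frame update are assumptions made for illustration, not Google's actual SDK:

```python
GAZE_DWELL_SECONDS = 2.0  # assumed value for the "few seconds" of gaze

class AdCube:
    """Toy model of the cube ad: opens a video player on tap or sustained gaze."""

    def __init__(self):
        self.gaze_time = 0.0
        self.player_open = False

    def update(self, dt, gazed_at, tapped):
        """Call once per rendered frame with elapsed time and input state."""
        if tapped:
            self.player_open = True
        if gazed_at:
            self.gaze_time += dt
            if self.gaze_time >= GAZE_DWELL_SECONDS:
                self.player_open = True
        else:
            self.gaze_time = 0.0  # dwell resets when the user looks away

    def close(self):
        """The user may dismiss the player instead of watching the ad."""
        self.player_open = False
```

A real plug-in would hook `update` into the engine's frame loop and use a raycast to decide `gazed_at`; the sketch only captures the trigger behavior the post describes.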

The goal is to enable this functionality across a variety of VR platforms, including Google’s own Daydream, Cardboard, and Samsung’s Gear VR, for starters.

This launch doesn’t mean that Advr is how Google thinks advertising in VR should work; it’s just an idea at this point. But Google isn’t the only tech company with plans for this space – Adobe, too, has been developing VR ad solutions, as have several others. If Advr were to be successful, though, it would be an example of an Area 120 project that could translate directly to Google’s bottom line.

Advr has already begun running tests with some VR game developers, but Google isn’t disclosing which ones. It is today inviting other VR developers to apply to try out the ad format through an invite-only program that will provide access to the early-stage SDK for Advr.

Interested developers can sign up here.

from TechCrunch

Jun 19, 2017

IKEA plans to improve AR shopping tools with help from Apple

In the first version, customers will take photos of their homes and use the app to place images of IKEA products wherever they might want them. Valdsgaard said users will be able to position products with millimeter precision and sizing of the products will be to scale. Eventually, customers may be able to try out products and then order them through the app.

The IKEA catalog app sort of lets you do this now, but not with much accuracy. In it, you can click on a piece of furniture and see it superimposed over whatever your camera is pointed towards. You can drag the furniture around and get an idea of what it might look like in your home and you can even use the app to make sure the size is accurate. But Valdsgaard says the new app will be better with the help of Apple's technology and experience with AR.

IKEA isn't the only retail company with AR shopping experiences. Both Lowe's and Pottery Barn have apps that let you get an idea of how furniture and appliances will look in your home. Lowe's even uses AR to help customers navigate their stores and VR to give lessons on how to successfully complete certain home improvement projects.

Valdsgaard says they would like to have the app available by the iOS 11 debut, but that it may not be possible. IKEA expects to have 500-600 products available on the app at launch.

from Engadget

Jun 18, 2017

Texas explicitly allows driverless car tests

All self-driving cars have to obey existing traffic laws and carry insurance. They also have to record video, and the manufacturer has to accept liability as long as the self-driving tech remains unmodified. These aren't exactly radical departures (many autonomous cars already have cameras, for one thing), but they establish a baseline. Critics are worried they're too lenient, however. There's no clear requirement that a human operator should be inside, and groups like AAA want a higher minimum insurance coverage than you'd see with conventional cars.

Even so, the new law is important. While it's not going to change minds at Waymo and other companies that were already inclined to test in Texas, it could encourage others to set up shop if they were previously skittish. And testing in Texas is particularly important -- numerous tech giants have offices in the state (particularly in the Austin area), and its warm climate makes it a good testing ground for vehicles that might not be ready to handle snowy roads.

from The Unofficial Apple Weblog (TUAW)

Jun 15, 2017

This self-driving grocery store has no employees

In order to shop at Moby, you first have to download an app to your phone. That's what gets you through the door, which is otherwise kept locked. You then walk through the store -- which is very small, fitting a maximum of four people at once -- and place your purchases into a smart basket. When you're ready, you simply walk out the door. You're automatically charged for the food you purchased. It's a concept that Amazon has been working on, but Wheelys may beat the retail giant to market. What's more, the solar-powered Moby is designed to restock itself automatically, driving to a warehouse, while another identical unit takes its place.

While Wheelys is testing its first Moby store in the bustling city of Shanghai, these autonomous, unmanned stores could also prove very useful in small, rural towns where grocery shops have closed, as well as urban food deserts. "I grew up in the countryside in Northern Sweden," said Tomas Mazetti, one of Wheelys' founders, to Fast Company. "The last store closed there in the 1980s sometime, and after that, everyone just commuted into the city, but that takes an hour. A little piece of the village died. Now, suddenly, in a place like that, the village can team up and buy one of these stores. If the village is really small, [the store] can move around to different villages."

The company is hoping to make these mobile markets affordable for small groups of people. It estimates a community could purchase a Moby store for around $30,000, plus an additional fee for logistical support. Eventually, the company wants to expand beyond groceries and to test home delivery services.

from The Unofficial Apple Weblog (TUAW)

May 25, 2017

The Library of Congress Makes 25 Million Records From Its Catalog Free to Download

Image by Carol Highsmith, via Wikimedia Commons

A quick FYI: According to Fortune, the Library of Congress announced that it “will make 25 million records from its catalog available for the public to download.” They add:

Prior to this, the records—which include books and serials, music and manuscripts, and maps and visual materials spanning from 1968 to 2014—have only been accessible through a paid subscription. These files will be available for free download on [the Library of Congress site] and are also available on

This move helps free up the library’s digital assets, allowing social scientists, data analysts, developers, statisticians and everyone else to work with the data “to enhance learning and the formation of new knowledge.” The huge data sets will be available here.

from Open Culture

May 11, 2017

Bill Wurtz's video presents history of the world in 20 minutes

Bill Wurtz is the guy who made a fantastically entertaining video history of Japan last year. In this video, he's taken on the slightly more ambitious task of presenting the history of the universe, beginning before the formation of matter and quickly focusing on a rapid fire lesson in world history. A+ work!

from Boing Boing

May 5, 2017

Scientists have eliminated HIV in mice using CRISPR

An important breakthrough has been made in the fight to eradicate AIDS. Scientists have found they can successfully snip HIV out of mouse cells using CRISPR/Cas9 technology.

Right now, patients with the deadly virus must use a toxic concoction of anti-retroviral medications to keep the virus from replicating. However, CRISPR/Cas9 can be programmed to chop out any genetic code in the body with scissor-like precision, including all HIV-1 DNA. And if you cut out the DNA, you stop the virus from being able to make copies of itself.

In findings first published in the journal Molecular Therapy, the team became the first to show that HIV can be completely eliminated from the body using CRISPR, and to impressive effect: after just one treatment, the scientists showed the technique had removed all traces of the infection from mouse organs and tissue.

However, it’s not a permanent solution and it’s still early days for the crew — the study merely builds on a previous proof-of-concept study they conducted last year and the technique has only been used on mice so far. But, should the scientists be able to replicate their findings, the technique could move to human trials in the future.

“The next stage would be to repeat the study in primates, a more suitable animal model where HIV infection induces disease, in order to further demonstrate elimination of HIV-1 DNA in latently infected T cells and other sanctuary sites for HIV-1, including brain cells,” said Dr. Khalili, a co-author of the study, in a statement. “Our eventual goal is a clinical trial in human patients.”

Featured Image: Nick Harris/Flickr UNDER A CC BY 2.0 LICENSE

from TechCrunch

Apr 21, 2017

Elon Musk’s Neuralink wants to turn cloud-based AI into an extension of our brains

Elon Musk has been working on Neuralink, a human-computer brain interface company, in whatever spare moments he has between running Tesla and SpaceX. Neuralink’s ultimate aim may actually be the most ambitious of all three of his companies, surprisingly, and a new exploration of the foundational ideas behind Neuralink on Wait But Why goes deep into what Musk hopes to achieve by creating better, higher-bandwidth connections between our brains and computers.

Musk has confirmed that he will indeed occupy the CEO role at Neuralink, which means he’ll be the CEO of three separate companies. But Neuralink’s goals definitely sound the most science-fictional of all three of his ventures, which is saying something considering Musk’s SpaceX is all about making humans an interplanetary, colonizing species.

Basically, Musk seems to want to achieve a communications leap equivalent in impact to when humans came up with language – language proved an incredibly efficient way to convey thoughts socially, but Neuralink aims to increase that efficiency by multiple orders of magnitude. Person-to-person, Musk’s vision would enable direct, “uncompressed” communication of concepts between people, instead of having to “compress” an original thought by translating it into language and then having the other party “decompress” the package linguistically, which is always a lossy process.

Neuralink’s tech would also help humans keep pace with the rapid advances in AI, and would achieve this by basically integrating AI with human consciousness, making AI just an additional faculty – like our sense of self or other higher in-brain thought faculties. Connecting with such high bandwidth directly into the brain would allow us to integrate cloud-based AI computing within ourselves in a way that’s indistinguishable from our core selves, Musk proposes, much like how most people would now find it difficult to separate their statements and expressions in language from the parts of the brain that generate them.

This tech is still far away from any kind of broad commercial application – maybe farther than a SpaceX trip to Mars. Musk says that it’s probably going to be at least “eight to 10 years” before tech the company produces can be used by someone without a disability. Neuralink is aiming to create therapeutic applications of its tech first, which will likely help as it seeks the necessary regulatory approvals for human trials.

Musk taking on a third CEO role is bound to raise eyebrows among his companies’ investors, but Neuralink’s mission is in keeping with the aim of his other two ventures: all three focus on solving problems that present what Musk would term existential threats – Neuralink’s agenda of countering AI not least among them.

Featured Image: Sebastian Kaulitzki/Shutterstock

from TechCrunch

Apr 20, 2017

Alphabet starts collecting health info to better predict disease

For one thing, Verily will sequence the genomes of all 10,000 subjects. That's no mean feat when it costs several thousand dollars per person, but it might help explain the genetic conditions that lead to certain illnesses. The firm also wants to analyze protein sets and the microscopic ecosystems inside the subjects' bodies. Participants will get results throughout the study, so they won't have to wait long to comb over their own data.

With that said, don't expect to hear about any insights for a while. Investigator Adrian Hernandez tells CNBC that it'll be "at least" five years before the knowledge from the study is useful to the public. The research fields are also still young enough that there's no guarantee the findings will be useful to the healthier people in the study. In a sense, though, that's the point: even if the information from healthy Baseline subjects doesn't turn up anything interesting, it will tell scientists where to focus their research. And for Verily, that would be a signal to shift its attention to its numerous other projects.

from Engadget

Apr 18, 2017

Chef Angela Dimayuga issues the perfect rejection of Ivanka Trump’s lifestyle brand

As she posted on Instagram, Angela Dimayuga—the executive chef at Mission Chinese Food—was approached by a writer from Ivanka Trump’s website about an interview that would spotlight Dimayuga as a female entrepreneur. Dimayuga wrote this in response:

Hi Adi,

Thank you for thinking of me. I’m glad you are a fan of my work so much that you want to provide more visibility for my career to inspire “other working women.” However, I’m for women who actually empower other women.

I don’t believe that is truly “a non-political platform of empowerment for [women]”. So long as the name Trump is involved, it is political and frankly, an option for the business to make a profit.

I don’t see anything empowering about defunding Planned Parenthood, barring asylum from women refugees, rolling back safeguards for equal pay, and treating POC/LGBT and the communities that support these groups like second class citizens.

As a queer person of color and daughter of immigrant parents I am not interested in being profiled as an aspirational figure for those that support a brand and a President that slyly disparages female empowerment. Sharing my story with a brand and family that silences our same voices is futile.

Thank you for the consideration.

Dimayuga also spoke to Elle about her response, explaining, “People, especially in marginalized communities are feeling more encouraged to speak and have their voices and narratives heard, and we all want to hear from them. We are beginning to be both more willing to share with no bullshit, and at the same time learning how to be better listeners.” You can see Dimayuga's message if you click through the Instagram below:

[Photo: Ali Shaker/VOA]

from Boing Boing

Apr 13, 2017

Apple might have a secret team working on glucose sensor technology for diabetes

Apple has hired a group of biomedical researchers to work on a secret project to monitor diabetic patients using sensors, according to a CNBC report.

An estimated 371 million people have the disease worldwide, and in the last few years several tech companies have been trying to come up with better solutions for those afflicted. Virta, a newly launched startup tackling type 2 diabetes, promises to cure patients by remotely monitoring their behaviors. Livongo Health is another Bay Area startup, which just raised $52.5 million to launch its blood sugar monitoring product.

Normally, patients monitor their glucose by pricking themselves to get a blood sample, so not having to do that anymore would be a real game-changer. One person told CNBC that Apple is developing optical sensors that shine a light through the skin to measure glucose.
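
CNBC’s source didn’t explain how the optics would work, and Apple has confirmed nothing, but optical sensing in general infers a substance’s concentration from how strongly it absorbs light, as described by the Beer-Lambert law A = εlc. A toy sketch of that relationship, with purely illustrative parameters and no claim about Apple’s actual method:

```python
def glucose_concentration(absorbance: float,
                          molar_absorptivity: float,
                          path_length_cm: float) -> float:
    """Invert the Beer-Lambert law, A = epsilon * l * c, for concentration c.

    absorbance:         dimensionless optical absorbance A
    molar_absorptivity: epsilon, in L/(mol*cm)
    path_length_cm:     optical path length l through the sample, in cm
    Returns concentration c in mol/L.
    """
    return absorbance / (molar_absorptivity * path_length_cm)
```

The hard part of non-invasive glucose sensing is not this arithmetic but the physics around it: skin, water and other tissue absorb light too, which is one reason the problem has resisted so many previous attempts.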

However, others have tried to build technology that bypasses pricking before, and it has proven tough to do. Alphabet’s life science company Verily has also tried to take on the disease with a smart contact lens that measures blood glucose levels through the eye, but some reports suggest the now three-year-old project isn’t going well.

But Apple’s project has been going on for at least five years, according to the report, and is now at the point where it’s ready for feasibility trials. Apple has also reportedly hired consultants to help it jump through the inevitable regulatory hoops.

The team is said to be made up of Johny Srouji, Apple’s senior vice president of hardware technologies, and possibly 30 other people — at least a dozen of whom could have come from a frenzy of hires Apple made from the biomedical field, including companies like ZONARE, Vital Connect, Sano and Medtronic.

While we can’t confirm details of the project with Apple (we’ve reached out but have not heard back yet), it would fit well with one of the company’s earlier visions. Steve Jobs believed Apple would one day sit at the intersection of technology and biology, according to Walter Isaacson’s biography of him. The Apple Watch is already there, counting steps, tracking calories burned, taking our heart rate and other biological measures. Add a sensor you can take wherever you go that detects glucose levels without drawing blood, and you’ve transformed an entire industry.

Featured Image: Tetra Images/Getty Images

from TechCrunch

Mar 27, 2017

Mass-produced artificial blood is now a real possibility

Doctors dream of having artificial blood always on hand, but the reality has usually been very different. While you can produce red blood cells in a lab, the current technique (which prods stem cells into action) only nets a small number of them at...

from Engadget

Mar 23, 2017

UK data scientists, digital media company team up to develop AI chatbots to triage care for NHS

The University of Essex has a plan to save the National Health Service billions of pounds per year: outsource treatment of minor ailments to a fleet of automated, AI-powered general practitioners, available right on a smartphone.

Through a partnership with digital and social media company Orbital Media and Innovate UK, a group of developers, data scientists and researchers will collaborate for 30 months to develop photorealistic avatars that will function as primary care chatbots. People can access the service to get interactive medical information on things like coughs, colds and flu, which fall into the category of self-treatable conditions that the NHS estimates account for nearly $2.5 billion (2 billion British pounds) per year of wasted healthcare spending.
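
The article doesn’t describe the avatars’ internals, but the underlying triage idea, steering self-treatable complaints toward automated advice and everything else toward a human clinician, can be illustrated with a toy rule-based sketch. The condition lists, labels and function name below are hypothetical illustrations, not the project’s actual design:

```python
# Toy triage sketch: route self-treatable complaints to automated advice,
# escalate red-flag symptoms, and refer everything else to a GP.
# All keyword lists are illustrative only, not clinical guidance.
SELF_TREATABLE = {"cough", "cold", "flu", "sore throat", "hay fever"}
RED_FLAGS = {"chest pain", "difficulty breathing", "severe bleeding"}

def triage(complaint: str) -> str:
    text = complaint.lower()
    # Safety first: any red-flag phrase overrides self-care routing.
    if any(flag in text for flag in RED_FLAGS):
        return "urgent: seek emergency care"
    if any(cond in text for cond in SELF_TREATABLE):
        return "self-care advice"
    return "refer to GP"
```

A production system would replace the keyword matching with trained language models and clinically validated decision rules; the point here is only the routing structure that makes the cost savings possible.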

It’s not quite telemedicine, or even a fully-versed virtual health assistant, but the developers call it a “visual, reliable and robust online health advice service, to meet the rapidly growing demand for online symptom searches.”

The University of Essex project will focus more on general, easy-to-treat conditions, but it fits in with the wave of smarter, medically focused search engines that have been cropping up in a few places as of late. Boston-based startup Buoy just launched its AI-powered, health-specific chatbot in an effort to change the status quo of people Googling their symptoms into panic-stricken oblivion. Doctor-booking platform provider ZocDoc also released a patient-focused search engine that understands natural human language, removing the need for a medical degree to accurately search for health information online. And in another NHS project, an AI chatbot called Babylon is being tested for non-emergency medical triage in north central London.

Symptom-checking for rare conditions may not be in the cards with this initiative, but it could reduce demand on overworked primary care physicians in the UK.

“GPs are currently under immense pressure, with significant amounts of money devoted to dealing with minor ailments," Orbital Media CEO Peter Brady said in a statement. “This comes at a time when the NHS is required to find $27.4 billion (22 billion British pounds) of efficiency savings by 2020. The potential for AI technologies to help relieve pressure from the heavily burdened primary care system is significant.”

Brady noted that even if spending on minor ailment treatment is reduced by just 1 percent, the AI technology could still save the NHS almost $25 million (20 million British pounds) per year. Computer scientists working on the project pointed to AI as playing an instrumental role in developing sustainable healthcare delivery models in the future.

“Artificial intelligence and machine learning technology have the potential to transform so many aspects of our everyday lives,” Dr. Luca Citi, of the University’s Computer Science and Electronic Engineering school, said in a statement. “We are excited about this opportunity to work with Orbital Media to see how we can share our expertise to have a significant impact on how health services might be delivered in the future."

from mobihealthnews

Feb 23, 2017

What do productivity, machine learning and next generation teams have in common? Google Cloud Next ‘17.

On March 8-10, Google will host one of its largest events ever — Google Cloud Next 2017. In the last year, the Google Cloud team has introduced some new products and solutions to help businesses face some of their biggest productivity problems. Next is our way of bringing together customers and partners under one roof to see the results of all these updates. That includes the latest cloud innovations and more than 200 sessions, where you can check out new products and features firsthand.

While I applaud anyone who figures out a way to attend all 200, there are a few sessions that you should definitely see if you want ideas to help boost your team’s productivity.

One that comes to mind is the Building stronger teams with team-based functionality session. Think about when you work on a project at home. Now think about how you work on a project at work. Do you find that your work’s success depends on a team of people rather than one person? Most would say yes. Yet, historically, productivity tools have focused on helping individuals get more done — like how you manage your inbox or tackle your to-do list. Since we rely on teams to successfully complete tasks, we need tools to help that group be more productive as a whole. It’s a new concept, and I’m excited that this session will share some of the early work that we’re doing to move beyond individual productivity to, instead, use technology to help entire teams achieve more.

Businesses hear all the time about how machine learning can have a positive impact, and many are interested to see how they can achieve that same impact for their companies. Fortunately, Google has always been at the forefront of machine learning technologies like computer vision, predictive modeling, natural language processing and speech recognition.

To that end, I recommend checking out Machine learning powering the workforce: Explore in Google Docs to see how machine learning in G Suite can instantly help you tackle everyday tasks and complex business challenges with the click of a button. Then, follow that up with Introduction to Google Cloud Machine Learning to learn how you can build your very own custom business applications on Google Cloud Platform (GCP).

Whether it's using the Sheets API to give project managers using Asana a way to do deeper comparison of their projects, or using the Slides API to create a deck in Slides from a Trello board in just one click, the ways in which our customers and partners are automating their processes using G Suite APIs are impressive (and growing). The APIs we’re building across G Suite, as part of the larger Cloud platform, are being tailored to solve the most common business flows and the Automating internal processes using Apps Script and APIs for Docs editors session shows how some folks are already using Apps Script to make their internal processes hum.

These are the sessions that excite me, but you can find the sessions that excite you in the full Next '17 agenda. And if you’re wondering, you can still register. Grab your spot and I’ll see you there!

from The Official Google Blog

Feb 7, 2017

RankScience wants to replace your SEO staff

The dark art of search engine optimization could be next in line for software-powered automation — potentially putting hundreds of thousands of ‘SEO consultants’ out of a job.

At least that’s the scenario sketched by RankScience, a YC-backed startup just graduating from the accelerator’s winter 2017 program, whose software-as-a-service automates the process of running thousands of A/B tests to identify which changes will improve the Google ranking of customer webpages in organic search results.

“Ultimately that’s who we do see ourselves replacing,” says founder Ryan Bedner of the humble human SEO consultant who spends their days eyeballing scores of dashboards trying to identify beneficial tweaks, and whose days of gainful employment could be numbered if manual analytics is overtaken by automation.

“It is an industry that I think we can disrupt,” he continues. “Humans are tweaking and measuring and changing, and software is really where things are going — and we think this is the start of that.”

So the pitch is: goodbye SEO dashboards and specialized in-house staff; and hello subscription software for automated testing and continuously optimized web pages. RankScience claims an average boost to organic search traffic of 37 per cent within three months, arguing such gains are a substantial step up from the competition, though it only has “a couple of dozen” customers at this point.

While Bedner says the SEO automation testing approach works well for “all sorts of different sites”, he flags particular benefits for ecommerce sites, marketplaces, directories, Q&A pages — “anything where companies have lots of pages”.

“Our software identifies areas of opportunity and things that companies should be testing based on where they’re ranking now, what their competitors are doing, and what opportunities we see. And also this data that we have from across our network — things that we see that are working elsewhere on similar sites,” he says.

The platform soft-launched in May last year, and the team has not yet done any active marketing. Most of the customers thus far are US-based but it does have some as far afield as Taiwan.

RankScience’s method requires customers to route their web traffic through its CDN so that it can run thousands of concurrent split tests on their behalf, although it describes the setup as very easy: “two minutes” and a simple “one-line code change,” is the claim.
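
RankScience hasn’t published its implementation, but the core mechanic of a CDN-side SEO split test can be sketched. Unlike user-level A/B testing, SEO experiments are typically bucketed by page, so search crawlers see a consistent variant of each URL across visits. A minimal Python sketch of deterministic page bucketing (the function and experiment names are hypothetical, not RankScience’s actual design):

```python
import hashlib

def assign_variant(page_url: str, experiment_id: str, variants: list[str]) -> str:
    """Deterministically assign a page to one variant of an SEO experiment.

    Hashing (experiment_id, page_url) gives a stable bucket, so every
    request for the same URL, including Googlebot's, sees the same variant,
    while different experiments reshuffle assignments independently.
    """
    digest = hashlib.sha256(f"{experiment_id}:{page_url}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big")
    return variants[bucket % len(variants)]
```

Because the assignment is a pure function of the URL and experiment ID, any CDN edge node can compute it on every request without coordinating shared state, which is what makes thousands of concurrent tests practical.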

It also claims to be doing things differently from most of the startup competition in the SEO space because it’s not just doing analytics; it’s also automating the changes themselves, taking a further piece of search optimization hassle out of its customers’ hands. That assumes, of course, that customers are comfortable relinquishing a little control over how their online content is structured, though the company sends weekly reports about changes to keep them in the loop.

The “most similar” competitor Bedner can be coaxed to name is BloomReach, which does ecommerce SEO for Fortune 500 companies, but he adds: “As far as I know, our continuous, automated split testing software is something they’re not doing. They don’t have a CDN, they do hosting for companies, so I think what we’re doing is differentiated from what they’re doing.”

Bedner says the idea for automating search engine optimization came from his “previous life” as an SEO consultant. “I realized my main advantage, relative to other consultants, was that I was a programmer, and so companies could add me to GitHub, and instead of just sending them PDFs with recommended changes I could actually execute the changes myself… Our CDN is an attempt to productize that,” he tells TechCrunch.

“Almost all of the other SEO software products are analytics tools. They give you insights into how you’re doing with rankings, or they maybe make recommendations around things that you should change,” he adds.

“Our product is the only piece of SEO software that actually does work for you. So instead of creating tasks for engineers or product managers, our software actually handles the work for you, and executes for you. Because we’re a CDN — we can actually make changes to your pages. And other products can’t.”

RankScience not only carries out A/B tests for its customers, it also identifies which SEO experiments to run — although it has not yet fully automated that part of its process. So it remains to be seen how it can scale that, and what impact fewer “human inputs”, as Bedner puts it, will have on the results it can deliver for customers.

“For customers it’s 100 per cent completely automated. On our end it’s mostly software with some human inputs,” he says of the product at this point. “When a company comes on board we do interview them to learn about things they care about etc, so there are some human inputs as well. But the rest is software.

“We identify what to change and the larger our network gets the more powerful the data we have becomes because we know what’s working on other sites across our network — and so we have a good idea of what SEO experiments companies should be running.”

Bedner emphasizes its SEO methods are bona fide ‘White Hat’. So there are no dubious techniques being deployed in a bid to boost search visibility; it’s entirely compliant with Google’s best practices, he says, describing himself as “totally confident” the approach won’t get a customer’s website penalized by Google for trying to game its algorithm.

As a consequence of looping traffic through RankScience’s CDN there is a small amount of latency involved, typically around 16ms or less, which Bedner says is minimal because so many websites are now hosted on Amazon Web Services’ cloud platform.

“The reason it’s so fast is that most companies now are on AWS and we spin up our CDNs in the same region as companies’ origin web servers on AWS. That’s one of the reasons that this could’ve never been built before. Now everyone’s basically in the same data center — AWS enables this speed.”

And if customers are using other cloud service platforms — say Google Cloud or Microsoft Azure — he says RankScience can just “spin up our CDNs in the same region”, so the latency will never exceed 16ms regardless of the platform a customer is using.

The team of five has raised a small pre-seed round from friends and the founders’ own money. Their next step after graduating from YC will be raising a seed round.

“We’re totally focused on scaling right now. We’ve spent this last year focusing on product, and validating a lot of our hypotheses. The past six months we’ve just been trying to add more customers, learn more about how this works for different sites and how people are using this. So we’re just totally focusing on scaling at this point, and improving the product,” he adds.

Featured Image: Global Panorama/Flickr under a CC BY 2.0 license

from TechCrunch

Jan 23, 2017

Top 10 Free Alternatives to Expensive Software

Unless you have a company buying licenses for you, tools like Photoshop, Pro Tools, Maya, or even Windows can take a serious bite out of your wallet. Here are some more affordable alternatives to those normally pricey apps that’ll help you stop fretting and get more done.


from Gizmodo: Top

Jan 10, 2017

The iPhone's legacy, 10 years later

January 9th, 2017 is a milestone day in the technology world: It's the 10th anniversary of Apple's iPhone. Yes, it's been a full decade since Steve Jobs took to the stage and introduced the device that many credit with defining the modern smartphone....

from The Unofficial Apple Weblog (TUAW)