WebProNews

Tag: Developer

  • Microsoft Building Team of Rust Developers

    Microsoft is building a team of Rust developers, both for internal work and collaboration with the community.

    Rust is a relatively new programming language. Syntactically, it’s similar to C++, but it is designed to offer better safety, especially in how it handles memory management and concurrency. The language was originally created by a developer at Mozilla, with the organization taking a leading role in its development. Much of Mozilla’s Rust team was laid off in 2020 as part of the roughly 250 employees the organization let go.

    Since then, some of the biggest names in tech have been snapping up the Rust developers who were laid off. Microsoft is the latest, posting a job listing for a Rust Principal Software Engineer.

    The job listing makes it clear the engineer will work on internal systems as well as collaborate with the Rust open source community. The engineer will be part of a newly formed team within the company.

    In this role you’ll work closely with product groups around Microsoft to gather requirements and develop tooling improvements for Rust. You’ll join a newly formed team with a vision to support Rust at Microsoft while also collaborating and sharing those improvements with the broader Rust OSS community.

    You’ll be working along with some of the most talented engineers in Microsoft on important internal systems programming workloads.

    Microsoft and other big companies’ support is good news, both for the Rust language, as well as for the developers laid off by Mozilla.

  • Google’s Flutter Now Boasts 2 Million Users

    Google’s open-source user interface (UI) framework, Flutter, has hit the two-million-user milestone just 16 months after release.

    Flutter is an open-source framework designed to help developers create applications for a variety of platforms, including Android, Google Fuchsia, iOS, Linux, macOS, Windows and the web. As a result, Flutter makes it easier for developers to create cross-platform apps. Frameworks like Flutter are becoming more and more popular as developers look to target a wider range of users without completely rewriting their code for each target platform.

    According to Tim Sneath, Product Manager for Flutter and Dart, Google continues “to see fast growth in Flutter usage, with over two million developers having used Flutter in the sixteen months since we released. Despite these unprecedented circumstances, in March we saw 10% month-over-month growth, with nearly half a million developers now using Flutter each month.”

    While 35% of users are in startups, some 26% are enterprise, 19% self-employed and 7% work in design agencies. Sneath says there are already 50,000 Flutter apps in the Play Store.

    Creating a programming framework is never easy and many good ideas have fallen by the wayside. In contrast, it seems Google definitely has a hit on its hands.


    Image Credit: Tim Sneath & Google

  • Core GitHub Features Free For Everyone

    GitHub has announced that its core features, including private repositories with unlimited collaborators, are now free for all users.

    GitHub provides one of the most popular platforms for software development version control, as well as collaboration and bug tracking features. It is used by developers around the world, in companies and organizations of all sizes.

    In a post on the company’s blog, CEO Nat Friedman made the announcement, saying that “until now, if your organization wanted to use GitHub for private development, you had to subscribe to one of our paid plans. But every developer on earth should have access to GitHub. Price shouldn’t be a barrier.

    “This means teams can now manage their work together in one place: CI/CD, project management, code review, packages, and more. We want everyone to be able to ship great software on the platform developers love.”

    The company is also lowering the price of its paid Team plan from $9/month per user to $4. The change goes into effect immediately.

    Friedman’s announcement is good news for developers and organizations alike.

  • Apple App Store Growing Fast, Paid $20 Billion to App Developers in 2016

    The Apple App Store paid out over $20 billion to developers in 2016, an increase of over 40% from 2015, according to Apple. The company also said that January 1, 2017 had the highest dollar volume of app purchases of any single day in the App Store’s history, with over $240 million in sales. Since the App Store launched in 2008, developers have earned over $60 billion.

    “2016 was a record-shattering year for the App Store, generating $20 billion for developers, and 2017 is off to a great start with January 1 as the single biggest day ever on the App Store,” said Philip Schiller, Apple’s senior vice president of Worldwide Marketing. “We want to thank our entire developer community for the many innovative apps they have created — which together with our products — help to truly enrich people’s lives.”

    Apple also noted that December 2016 was an amazing month for app purchases, hitting over $3 billion in sales.

    Subscription billings, which became available in all categories just this fall, are one of the fastest-growing segments of app sales. There are over 20,000 apps that can be subscribed to for a monthly fee, including popular services such as Netflix, HBO Now, Line, Tinder and MLB.com At Bat. Subscription-based apps generated $2.7 billion in billings in 2016, up 74% over 2015.

    Apple also recently announced its Best of 2016 music list.

  • Social Media Pulse Model Predicts Tweet Explosions

    “When you notice patterns over time, you have a suspicion that there’s an underlying reason why they happen — and maybe we can encode that in a model so that with just a little bit of data we can predict what will happen,” noted Josh Montague. Montague and Scott Hendrickson, another data scientist at Twitter, created what they call “The Social Media Pulse,” a model that can predict the explosion of tweets on Twitter when big events happen.

    From their report:

    In the digitally connected world, social media platforms are often the primary means by which people share their observations and perspectives during significant events. Such platforms present low-friction ways to share their point-of-view experiences, whether in simple text or visual media (photos, videos, gifs, etc.). Because of this ease of sharing, the aggregate data produced by the platform’s users is a rich source of insight into broad, cultural behavior. At times, we can even observe the ways by which these behaviors manifest in platform-specific patterns. Given enough data that displays these patterns, we can begin to develop models based on them.

    An analyst can be prepared to produce both descriptive and predictive results based on observed data by empowering them with models that describe the users and the users’ responses to events. Obtaining a model representation of the data enables the analyst to compare parameters across multiple events (e.g., time scales, coefficient magnitudes); or, for a single event, one could compare similar parameters across multiple social platforms. Such a model could also be a component of broader trend- or event-detection methods, potentially assisting the analyst in handling real-time news media or public relations.

    Their model looks at 3 types of events that can explode on Twitter: Expected Events, Unexpected Network Spread Events and Unexpected Social Media Pulse Events. “With a Social Media Pulse model applied to observed data, one can calculate relevant metrics like an estimated time to the Pulse peak, or total expected Tweet volume,” noted Cassie Stewart, who is in Data Process Marketing at Twitter, in a blog announcement. “The Social Media Pulse model can take on a range of similar shapes, as shown in the figure below that compares three different earthquakes. The resulting fits can then be compared across multiple events to draw comparisons.”

    [Figure: Social Media Pulse curve fits compared across three different earthquakes]

    “This model is just a start, but with it we have an opportunity to look for, and compare patterns observed on the platform,” says Stewart. “Through the use of analyses like these you can gain quantitative insights into real-time, real-world events, to better quantify the observations and conclusions made from social data streams.”

    The team has released code details with examples to encourage developers to incorporate their model into third party applications. (PDF Download and GitHub Repository)
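
    As a rough, standalone illustration of the idea (this is not the authors’ released code), the sketch below fits a gamma-shaped pulse to synthetic per-minute tweet counts and reads off the kinds of quantities Stewart mentions, such as the time to the pulse peak and the total expected tweet volume. The gamma functional form and the data are assumptions made purely for the example.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import gamma

    def pulse(t, volume, shape, scale):
        # Tweet rate at time t: a total volume spread over a gamma-shaped pulse.
        return volume * gamma.pdf(t, a=shape, scale=scale)

    # Synthetic "observed" counts for the 30 minutes after an event (placeholder data).
    t = np.arange(1, 31, dtype=float)
    observed = pulse(t, volume=50_000, shape=2.0, scale=5.0)
    observed += np.random.default_rng(0).normal(scale=30, size=t.size)

    # Fit the pulse parameters, then derive the peak time and total volume.
    (volume, shape, scale), _ = curve_fit(pulse, t, observed, p0=[10_000, 1.5, 3.0])
    peak_minute = (shape - 1) * scale if shape > 1 else 0.0  # mode of the gamma pdf
    print(f"estimated total tweets: {volume:,.0f}")
    print(f"estimated time to peak: {peak_minute:.1f} minutes")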

  • New Amazon F1 Instance Reduces Capital-Intensive and Time-Consuming Steps in App Development

    Amazon is making news about lots of interesting things at its AWS re:Invent 2016 conference, currently underway in Las Vegas, and its just-announced F1 instance is no exception.

    “Today we are launching a developer preview of the new F1 instance,” said Jeff Barr, Chief Evangelist at Amazon Web Services. “In addition to building applications and services for your own use, you will be able to package them up for sale and reuse in AWS Marketplace. Putting it all together, you will be able to avoid all of the capital-intensive and time-consuming steps that were once a prerequisite to the use of FPGA-powered applications, using a business model that is more akin to that used for every other type of software. We are giving you the ability to design your own logic, simulate and verify it using cloud-based tools, and then get it to market in a matter of days.”

    Here are the specs on the FPGA (there are up to eight of these in a single F1 instance):

    – Xilinx UltraScale+ VU9P fabricated using a 16 nm process.
    – 64 GiB of ECC-protected memory on a 288-bit wide bus (four DDR4 channels).
    – Dedicated PCIe x16 interface to the CPU.
    – Approximately 2.5 million logic elements.
    – Approximately 6,800 Digital Signal Processing (DSP) engines.
    – Virtual JTAG interface for debugging.

    The F1 instance will significantly speed up applications that are built for a specific purpose. “The general purpose tools can be used to solve many different problems, but may not be the best choice for any particular one,” says Barr. “Purpose-built tools excel at one task, but you may need to do that particular task infrequently.”

    Typically, says Barr, this requires another balancing act: trading off the potential for incredible performance against a development life cycle often measured in quarters or years.

    “One of the more interesting routes to a custom, hardware-based solution is known as a Field Programmable Gate Array, or FPGA,” said Barr. “This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications.”

    “I hope that this sounds awesome and that you are chomping at the bit to use FPGAs to speed up your own applications,” said Barr. “There are a few interesting challenges along the way. First, FPGAs have traditionally been a component of a larger, purpose-built system. You cannot simply buy one and plug it in to your desktop. Instead, the route to FPGA-powered solutions has included hardware prototyping, construction of a hardware appliance, mass production, and a lengthy sales & deployment cycle. The lead time can limit the applicability of FPGAs, and also means that Moore’s Law has time to make CPU-based solutions more cost-effective.”

    Amazon believes that they can do better, and that’s where the F1 instance comes in.

    “The bottom line here is that the combination of the F1 instances, the cloud-based development tools, and the ability to sell FPGA-powered applications is unique and powerful,” says Barr. “The power and flexibility of the FPGA model is now accessible to all AWS users; I am sure that this will inspire entirely new types of applications and businesses.”

    Developers can sign up now for the Amazon EC2 F1 Instances (Preview).
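
    For a feel of how one of these instances would be requested once preview access is granted, here is a minimal sketch using the standard EC2 API via boto3. The AMI ID and key pair name are hypothetical placeholders, and the f1.2xlarge size is assumed here for illustration rather than confirmed by the announcement.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request a single FPGA-equipped instance. The ImageId would be an FPGA
    # developer AMI and the key pair an existing one in your account; both are
    # hypothetical placeholders here.
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",       # hypothetical FPGA developer AMI
        InstanceType="f1.2xlarge",    # assumed F1 instance size
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",         # hypothetical key pair
    )

    print(response["Instances"][0]["InstanceId"])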

  • Microsoft Democratizing AI with Cognitive Toolkit Release

    Microsoft is changing, folks: it’s no longer the hated technology company that hoards power and technology. Today, Microsoft released to developers an updated version of the Microsoft Cognitive Toolkit, which uses deep learning so that computers, using huge data sets, can learn on their own.

    For instance, developers could feed CPUs and NVIDIA® GPUs millions of images of vegetables, and the system would learn over time which are cucumbers, no matter how distorted and different they appear. It matches what is similar and over time becomes very good at it. This matching and learning technology is applicable to an infinite number of software solutions.

    The toolkit is free, easy to use and open source, and Microsoft says it trains deep learning algorithms to learn like the human brain. In fact, it’s helping to change the world while simultaneously changing Microsoft.

    “This is an example of democratizing AI using Microsoft Cognitive Toolkit,” says Xuedong Huang, who is Microsoft’s Chief Speech Scientist.

    Microsoft originally created the Toolkit for internal use. “We’ve taken it from a research tool to something that works in a production setting,” noted Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.

    The current version of the toolkit can be downloaded via GitHub with an open source license. It includes new functionality letting developers use Python or C++ programming languages and allows researchers to do a type of artificial intelligence work called reinforcement learning.

    The latest version is also much faster when working with big datasets spread across multiple computers, which is absolutely necessary for implementing deep learning across multiple GPUs. This allows developers to create smart, AI-enabled consumer products and enables manufacturers to connect more smart devices, empowering the IoT revolution.

    Deep learning, according to Microsoft, is an AI technique in which large quantities of data, known as training sets, literally teach computer systems to recognize patterns from huge quantities of images, sounds or other data.


    Just last week, Microsoft announced an historic voice recognition breakthrough, reaching virtual parity with human speech. Microsoft’s AI team credited the Cognitive Toolkit’s speed improvements with allowing them to reach this level of performance so soon.

  • Google+ Upgrades to a core Google Apps for Work Service

    The Google Apps Team has announced on their blog that starting today, Google+ is considered a core Google Apps for Work service when used within a domain or corporate intranet. This gives Google+ the same level of technical support as their other core services such as Google Drive or Gmail.

    The Apps Team says that most current Google Apps agreements will suffice and that Google+ will be compliant with all of the same terms, conditions and service levels described in the Google Apps Technical Support Services Guidelines and the Google Apps Service Level Agreement. Google+ will be added to the Apps Service Dashboard (ASD) and to the Admin console.

    Desktop users will now be automatically upgraded to the current Google+ version and will no longer have an option to stay with a “classic” version. Mobile users of the Google+ app should upgrade to the current version, since older versions will no longer be supported.

    “Engaging people is key to innovation, and many Google Apps for Work domains have found Google+ to be an invaluable tool that helps drive active engagement and cultivate innovative ideas from all levels of the organization,” notes the Google Apps Team. “By making Google+ the newest core service for Google Apps, Google+ will now provide the support and guaranteed SLAs that businesses need.”

    Google+ will continue to be a non-core service for Google Apps for Education domains.

  • Amazon Kinesis Analytics Announced for AWS Developers

    Amazon Web Services (AWS) is making available Amazon Kinesis Analytics, enabling continuous querying of streaming data using standard SQL. This allows developers to create SQL queries on live and continuous data for real-time analysis. No new programming skills are needed.

    “AWS’s functionality across big data stores, data warehousing, distributed analytics, real-time streaming, machine learning, and business intelligence allows our customers to readily extract and deploy insights from the significant amount of data they’re storing in AWS,” said Roger Barga, General Manager, Amazon Kinesis. “With the addition of Amazon Kinesis Analytics, we’ve expanded what’s already the broadest portfolio of analytics services available and made it easy to use SQL to do analytics on real-time streaming data so that customers can deliver actionable insights to their business faster than ever before.”


    Amazon Kinesis Analytics processes streaming data with sub-second processing latency, enabling you to analyze and respond in real time. According to Amazon, it provides built-in functions that are optimized for stream processing, like anomaly detection and top-K analysis, so that you can easily perform advanced analytics.

    “You can now run continuous SQL queries against your streaming data, filtering, transforming, and summarizing the data as it arrives,” said AWS Chief Evangelist Jeff Barr. “You can focus on processing the data and extracting business value from it instead of wasting your time on infrastructure. You can build a powerful, end-to-end stream processing pipeline in 5 minutes without having to write anything more complex than a SQL query.”

    “When I think of running a series of SQL queries against a database table, I generally think of the data as staying more or less static while the queries come and go pretty quickly,” Barr explained. “Rows are added, changed, and deleted all the time, but this does not generally matter when considering a single query that runs at a particular point in time. Running a Kinesis Analytics query against streaming data turns this model sideways. The queries are long-running and the data changes many times per second as new records, observations, or log entries arrive. Once you wrap your head around this, you will see that the query processing model is very easy to understand: You build persistent queries that process records as they arrive.”
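
    As a rough sketch of what that looks like in practice, an application can be created with a continuous query attached as its application code. The application name is a placeholder, and the SQL is meant to illustrate the pump-and-stream style Kinesis Analytics uses rather than a tested query.

    import boto3

    # Illustrative continuous query: count records per ticker for each minute of
    # the source stream. "SOURCE_SQL_STREAM_001" is the default in-application
    # input stream name; the rest is a sketch, not a tested query.
    application_code = """
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (ticker VARCHAR(8), trade_count INTEGER);

    CREATE OR REPLACE PUMP "STREAM_PUMP" AS
      INSERT INTO "DESTINATION_SQL_STREAM"
      SELECT STREAM ticker, COUNT(*) AS trade_count
      FROM "SOURCE_SQL_STREAM_001"
      GROUP BY ticker, FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE);
    """

    client = boto3.client("kinesisanalytics")
    client.create_application(
        ApplicationName="trade-counts",   # placeholder name
        ApplicationDescription="Per-minute trade counts from a Kinesis stream",
        ApplicationCode=application_code,
    )
    # The input Kinesis stream (and any output destination) would be attached
    # separately, e.g. with add_application_input / add_application_output.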

    AWS customers can employ Amazon Kinesis Analytics in minutes by going to the AWS Management Console and selecting a Kinesis Streams or Kinesis Firehose data stream. Amazon says that Kinesis Analytics takes care of everything required to continuously query streaming data, automatically scaling to match the volume and throughput rate of incoming data while delivering sub-second processing latencies.

  • Snap a Photo and Find Related Items on Pinterest

    Pinterest is on the cutting edge of visual recognition technology, which matches images with other similar images accurately enough to show related items next to images that have no text at all. Coming soon is a feature that will allow Pinterest users to take a photo of an object, like a purse, and instantly compare that real-world object with other similar ones.

    Today, Pinterest introduced Automatic Object Detection for its most popular categories, allowing visual based searches for products in a Pin’s image.

    “As we look to the future of visual search, we’re also starting to preview new camera search technology that’ll give Pinners recommendations for the products they find in the real world,” stated Dmitry Kislyuk, a software engineer on Pinterest’s Visual Search Team. “Pinners will soon be able to snap a photo of a single object like sneakers – and get recommendations on Pinterest, or even take a photo of an entire room and get results for multiple items.”

    Many people don’t realize the extent of R&D that Pinterest has invested and the intensity of their focus on visual recognition and visual search research. “Visual search is one of the many fields transformed in recent years by the advances in deep learning,” Kislyuk said. “Convolutional neural networks represent images and videos as feature vectors which preserve both semantic concepts and visual information, and allows for fast retrieval when using optimized nearest neighbor techniques.”

    Pinterest’s Kislyuk elaborated in a blog post:

    “We leveraged this idea, along with our richly annotated image dataset, last November when we released a visual search product that makes searching inside a Pin’s image as simple as dragging a cropper. For our initial launch, we extracted the fully-connected-6 layer of a fine tuned VGG model over a billion Pinterest images and indexed them into a distributed service, as described in our KDD paper.”
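
    The underlying retrieval idea is straightforward to sketch, even though Pinterest’s production system is far more elaborate: represent every image as a feature vector, then rank a catalog by similarity to the query vector with a nearest-neighbor search. The random vectors below are placeholders standing in for CNN embeddings such as the VGG features mentioned above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend catalog: 10,000 images, each represented by a 128-dim embedding.
    index = rng.standard_normal((10_000, 128))
    index /= np.linalg.norm(index, axis=1, keepdims=True)   # L2-normalize

    # Embedding of the query image (or of a cropped region of it).
    query = rng.standard_normal(128)
    query /= np.linalg.norm(query)

    scores = index @ query               # cosine similarity after normalization
    top_k = np.argsort(-scores)[:5]      # the 5 most visually similar images
    print(top_k, scores[top_k])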

    The goal is to use automatic object detection to make visual search a seamless experience on Pinterest. Detecting objects in visual search allows Pinterest to do object-to-object matching. Then, if you see a chair you like at a store or at someone’s house, or you find that perfect chair on Pinterest, you will be able to view it in various home decor settings.

    From Pinterest:

    Building automatic object detection

    Our first challenge in building automatic object detection was collecting labeled bounding boxes for regions of interest in images as our training data. Since launch, we’ve processed nearly 1 billion image crops (visual searches). By aggregating this activity across the millions of images with the highest engagement, we learn which objects Pinners are interested in. We aggregate annotations of visually similar results to each crop and assign a weak label across hundreds of object categories. An example of how this looks is shown in the heatmap visualization below, where two clusters of user crops are formed, one around the “scarf” annotation, and another around the “bag” annotation.
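
    The aggregation step described above reduces to something like the toy sketch below: collect the annotations attached to a crop’s visually similar results and keep the most frequent one as that crop’s weak label. This is illustrative only, and the category names are made up.

    from collections import Counter

    def weak_label(similar_result_annotations):
        # Annotations of the visually similar results returned for one user crop.
        counts = Counter(similar_result_annotations)
        label, _ = counts.most_common(1)[0]
        return label

    print(weak_label(["scarf", "scarf", "bag", "scarf"]))   # -> scarf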

    Read more on this at the Pinterest Engineering Blog.

  • Google Launches New Android Feature ‘Nearby’

    Google launched a cool and, I think, very useful feature for Android today called Nearby. Nearby uses Bluetooth to seek out nearby beacons that are connected to Android apps on your phone. Developers of ecommerce and in-store apps are going to be staying up late working to incorporate Nearby because of its potential to bring in more sales.

    Nearby was announced back in May.

    Nearby can be used by any app that provides the mobile phone user with real-time, location-based information. For instance:

    • A museum app can open new information and multimedia as a visitor passes each exhibit, surfacing content related to the display they are standing in front of.
    • As you walk down the aisle at Kroger, the Kroger app can offer you app-only deals as you pass items on the shelf.
    • As you stand in line at the DMV in California, an app could ask you to fill out certain forms and then direct you to the line that handles your type of need, such as renewing your car’s registration.
    • At the car dealership, an app from the dealer could offer you a deal that isn’t on the sticker, possibly timed by how long you stood close to a particular car.

    The Google Android Blog offered some additional examples:

    • Print photos directly from your phone at CVS Pharmacy.
    • Explore historical landmarks at the University of Notre Dame.
    • Download the audio tour when you’re at The Broad in LA.
    • Skip the customs line at select airports with Mobile Passport.
    • Download the United Airlines app for free in-flight entertainment while you wait at the gate, before you board your flight.

    Google says that to use Nearby, just turn on Bluetooth and Location, and it will show you a notification if a nearby app or website is available. According to Google, Nearby has started rolling out to users as part of the upcoming Google Play Services release and will work on Android 4.4 (KitKat) and above.

    The Google Developers Blog also provided information for developers on incorporating Nearby within Android apps:

    Getting started is simple. First, get some Eddystone beacons; you can order these from any one of our Eddystone-certified manufacturers. Android devices and other BLE-equipped smart devices can also be configured to broadcast in the Eddystone format.

    Second, configure your beacon to point to your desired experience. This can be a mobile web page using the Physical Web, or you can link directly to an experience in your app. For users who don’t have your app, you can either provide a mobile web fallback or request a direct app install.

    Nearby has started rolling out to users as part of the upcoming Google Play Services release and will work on Android devices running 4.4 (KitKat) and above. Check out our developer documentation to get started. To learn more about Nearby Notifications in Android, also check out our I/O 2016 session, starting at 17:10.
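
    For a concrete sense of what “broadcasting in the Eddystone format” means, the sketch below assembles an Eddystone-URL frame payload, i.e. the compressed URL a beacon advertises so Nearby can surface it. This is only an illustration of the published frame layout, not Google’s tooling; the URL and TX power value are placeholders.

    # Scheme prefixes and a few of the URL expansion codes from the Eddystone-URL spec.
    SCHEMES = {"http://www.": 0x00, "https://www.": 0x01, "http://": 0x02, "https://": 0x03}
    EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03, ".com": 0x07}

    def eddystone_url_frame(url, tx_power=-20):
        for scheme, code in sorted(SCHEMES.items(), key=lambda s: -len(s[0])):
            if url.startswith(scheme):
                prefix, body = code, url[len(scheme):]
                break
        else:
            raise ValueError("unsupported URL scheme")
        frame = bytearray([0x10, tx_power & 0xFF, prefix])   # frame type, TX power, scheme
        i = 0
        while i < len(body):
            for text, code in EXPANSIONS.items():             # ".com/" is checked before ".com"
                if body.startswith(text, i):
                    frame.append(code)
                    i += len(text)
                    break
            else:
                frame.append(ord(body[i]))                    # ordinary characters are ASCII bytes
                i += 1
        if len(frame) > 20:                                   # Eddystone-URL frames max out around 20 bytes
            raise ValueError("URL too long; use a shortener")
        return bytes(frame)

    print(eddystone_url_frame("https://www.example.com/").hex())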

    For all the information developers will need, go to the Google Nearby developer site.

  • Watch F8 Live Right Here Via Facebook’s Official Stream

    Facebook’s F8 developer conference begins today in San Francisco, but you don’t have to be there to see the opening keynote from Mark Zuckerberg or some of its stand-out sessions.

    You can view these right from the player below beginning at 10AM PT (streaming will actually begin at 9:30AM PT).

    Here’s the schedule, via an email from a Facebook spokesperson, for what you’ll be able to stream from the player today and tomorrow.

    Day 1 (April 12)

    10AM PT: Keynote
    12PM PT: Messenger: Connecting People and Businesses
    12:30PM PT: Growing Your Business with Facebook Pages
    1PM PT: Onboarding and Account Management for Apps
    2PM PT: Creating Value for News Publishers and Readers on Facebook
    3PM PT: Deeper Insights with Facebook Analytics for Apps

    Day 2 (April 13)

    10AM PT: Keynote
    12PM PT: The Technology Behind 360 Video
    1PM PT: Optimizing 360 Video for Oculus
    2PM PT: Leveraging Facebook as a Platform for eCommerce
    2:30PM PT: Building iOS Tooling at Facebook Scale

  • Search Bing For Code, Edit It Right In Search Results

    Bing has teamed up with HackerRank on a new way to search for code and see it run in a live code editor within Bing’s search results.

    Rather than looking through sites like Stack Overflow, Stack Exchange, and blogs, you can search from Bing and save a great deal of time. At least that’s what they’re promising. Just type a query, hit enter, and get a solution along with the ability to edit the code in real time.

    Bing Group Engineering Manager Marcelo De Barros and HackerRank CEO and co-founder Vivek Ravisankar wrote a joint blog post announcing the news (h/t: Search Engine Journal). They said:

    It’s a typical night. You’re in the zone with a half-full Red Bull by your side. You’ve come a long way in learning a brand new programming language when—bam—you run into a problem you’re not quite sure about. So, you do what any programmer would do in your situation: You search for the solution.

    This is one of the most common productivity pitfalls for programmers today. If you want to improve on or learn a new algorithm, you search in engines and figure out which blue link to click. Then, you have to transfer all of this into your editor. You trial and error until you find the right solution. If only there was a way to search a function and immediately see the solution in one step.

    This is the type of scenario they’re aiming to help with.

    The functionality is live now; C, C++, C#, Python, PHP, and Java are supported. Just go to Bing and search accordingly.

    Image via HackerRank/Bing

  • Facebook F8 2016 Conference Is Open For Registration

    Facebook just announced that registration is now open for F8 2016. It takes place in San Francisco on April 12 & 13.

    F8 2016 – Registration is Now Open!

    Come meet with other developers and innovators who are excited about connecting the world! We're hosting F8, our global developer conference on April 12 + 13, in San Francisco, California.

    Posted by Facebook for Developers on Friday, January 29, 2016

    Here’s what you can expect from the event, according to a post on the Facebook Developers blog:

    – Keynote: Hear from Mark about how Facebook is helping developers build, grow and monetize success, and where we’re headed in the future.

    – 40+ Sessions: Attend talks from Facebook, Messenger, WhatsApp, Oculus, Instagram, advertising, engineering and more — there’s content for everyone across Facebook’s family of apps and services.

    – Interactive Demos: Experience our immersive products for yourself with hands-on demos featuring Oculus Rift and Touch, 360 video and more.

    – Innovation Lab: Learn how we plan to improve connectivity, enhance infrastructure and scale artificial intelligence and virtual reality around the world.

    – Meet the Facebook Team: Connect with our engineers and product experts for 1:1 personalized support to learn how you can grow and optimize your app business.

    – After Party: Join us for a fun after party at the end of the first conference day, and enjoy music, food and drinks while getting to know other developers.

    You can apply for the event here. Space is limited, but they will be streaming the whole thing online as usual.

    Image via Facebook

  • Apple Discontinues iAd App Network

    Apple announced that it is discontinuing the iAd App Network this summer. To be clear, it’s not shutting down iAd itself. App developers will still be able to run ads in their apps. The iAd App Network is an offering that allows developers to feature ads for their own apps across the publisher network.

    Either way, developers using the iAd App Network will still be able to use it throughout the first half of this year.

    A message on developer.apple.com says:

    The iAd App Network will be discontinued as of June 30, 2016. Although we are no longer accepting new apps into the network, advertising campaigns may continue to run and you can still earn advertising revenue until June 30. If you’d like to continue promoting your apps through iAd until then, you can create a campaign using iAd Workbench. We will continue to keep you updated, but if you have any questions, contact us.

    The news follows a report from last week indicating that Apple is phasing out its in-house iAd sales team in favor of a new publisher-driven system.

    Image via Apple

  • Facebook Launches Audience Network

    Facebook introduced the Audience Network at its F8 developer conference in April. This is its new mobile ad network, which lets mobile apps monetize through Facebook’s 1.5 million active advertisers.

    The network is now “open for business,” as the company puts it.

    “Over the past few months, we’ve optimized our network to improve performance, and today we’re formally launching and extending the service to more developers and publishers across the globe,” says Facebook’s Tanya Chen.

    “The Audience Network shows people the right ads by extending Facebook’s targeting to third-party apps,” she says. “This means the ads match the interests of people, just as they do on Facebook. It also means people are more likely to engage with the ads.”

    Existing advertisers can extend their Facebook campaigns to the Audience Network with a click.

    There are native, banner, and interstitial formats on the Audience Network.

    Deezer, Le Monde, Wooga, Zynga, Outfit7, Cheetahmobile, Vinted, Merriam-Webster, Shazam, Glu, MyFitnessPal, and IGN are among existing partners.

    Image via Facebook

  • WWDC: Here’s How to Watch Apple’s Event Live

    Apple, which is rather inconsistent with its live streams, has decided to let the world in on its Worldwide Developers Conference keynote today. You can stream it live on Apple’s event site, if you have the right Apple device.

    In order to stream the WWDC keynote via Apple’s official site, you’ll need Safari 4 or later on OS X v10.6 or later, or Safari on iOS 4.2 or later. You can also stream it on your Apple TV; you’ll need a second- or third-generation model running software 5.0.2 or later.

    The annual conference kicks off today with the keynote, which will begin at 10am PDT.

    The conference itself will run through June 6th.

    Though Apple is tight-lipped (as always) about what’s going down at WWDC this year, it’s likely that we’ll get to see the new iOS 8 and OS X. It’s also possible that we’ll see some additional hardware announcements. Some rumors suggest we could see Apple step into the world of med-tech with a health tracking platform.

    Of course, there’s really no use in speculating. Just watch the keynote here in a few hours to find out where Apple’s turning its focus in 2014. If you’re tuning in to see a new iPhone or iPad, you’re probably going to be disappointed.

    Image via Apple

  • Apple’s WWDC Keynote Will Be Live Streamed Next Week

    Apple has just announced that its annual Worldwide Developers Conference will be available to stream online this year (or at least the keynote address). Apple has a history of streaming some major events and leaving us in the dark of others, so this is good news for those who want to see what Apple has in store for the next year right as it happens.

    The keynote will stream at 10am PDT on June 2nd. The conference itself will run through June 6th. Bookmark this page for the livestream. Do note, however, that you’ll need Safari 4 or later on OS X v10.6 or later, or Safari on iOS 4.2 or later. If you want to stream it on your Apple TV, you’ll need a second- or third-generation model running software 5.0.2 or later.

    “We have the most amazing developer community in the world and have a great week planned for them,” says Philip Schiller, Apple’s senior vice president of Worldwide Marketing. “Every year the WWDC audience becomes more diverse, with developers from almost every discipline you can imagine and coming from every corner of the globe. We look forward to sharing with them our latest advances in iOS and OS X so they can create the next generation of great apps.”

    We’re likely to see the new iOS 8 and a new version of OS X next week as Apple’s top brass take the stage at Moscone West in San Francisco. Recent rumors suggest that Apple could unveil a new home automation system. MacRumors says we’ll likely get some additional hardware announcements, but we’re not likely to see a new iWatch or Apple TV model.

    Image via Apple

  • Mark Zuckerberg Buys 4 New Homes For More Privacy

    Mark Zuckerberg is worried about privacy, and as a result he decided to buy four houses owned by his neighbors. The houses, near his home in Palo Alto, California, were not even on the market, so he offered absurdly high prices to the homeowners. It seems that Zuckerberg, the CEO of Facebook, that guy we all got to know through his story in The Social Network, is now able to do anything. Let’s face it: if Mark Zuckerberg asked to buy your house, would it really be possible to turn him down? The guy who started a website to rate girls based on whether they were hot or not, and eventually became the founder of Facebook, is now one of the most successful people in the business world.

    Even though the houses were not on the market, Zuckerberg found a way around that, buying one home for $14 million when the median value in his Crescent Park neighborhood is only about $3 million. He paid $30 million in total for all four houses, located next door to and behind his current home. Zuckerberg, a billionaire at age 29, acted on impulse after learning that a developer was planning to buy one of the properties next door to him. A source said, “The developer was going to build a huge house and market the property as being next door to Mark Zuckerberg.” He certainly did not want people coming over and making a spectacle of him, so he decided to take action and avoid the situation.

    The Facebook founder wants to be able to control how the properties around his home are marketed and who they are sold to. He will be leasing the homes back to the homeowners, since he has no problem with who was currently living there, and just wanted to avoid a possibly difficult future situation. He started buying the homes in December of 2012 after learning of the developer’s plans. Zuckerberg is currently worth $19 billion, as of September, according to Forbes. He attended Harvard, but dropped out, and subsequently became one of the most successful entrepreneurs of the internet generation.

    Zuckerberg is married to Priscilla Chan, and the couple also own a house in the Dolores Heights district of San Francisco. Their San Francisco home was bought this year for about $10 million, and is located about three miles from Facebook’s headquarters. They were married in the backyard in May of 2012. The all-star of internet business also made a statement recently, issuing his plan to make the internet more affordable.

    Image via Wikimedia Commons

  • Developer Works Around Google Glass Restrictions With Custom OS

    Google Glass has a lot of potential, but some may feel that Google is squandering said potential with its heavy handed regulation. The company has already banned porn and facial recognition apps from being developed for the device. Before Google can ban more capabilities, one developer wants to free Glass from Google’s reach.

    NPR reports that Stephen Balaban, a developer out of San Francisco, has re-engineered his Google Glass with a new operating system. Unlike Google’s OS for Glass, Balaban’s OS allows any and all kinds of apps. His end goal is to create an OS that “runs on Glass but is not controlled by Google.”

    Balaban is creating the custom OS because he’s already run afoul of Google’s policies when his employer, Lambda Labs, tried to submit a facial recognition app. Instead of just packing up and going home, however, he decided to create a custom OS that would let developers do anything with Google Glass.

    Balaban’s heart is in the right place, but some people are not going to be very happy about it. Google is already dealing with accusations that Glass in its current state is a massive infringement of privacy, and custom operating systems will only make Glass critics even more worried.

    In fact, Congress has even started to ask questions regarding what Google intends to do to protect privacy in the age of Glass. A custom operating system would make Google’s efforts to regulate Glass a moot point, and Congress could take that as an excuse to regulate the hardware. Google certainly doesn’t want that, and the people making custom operating systems don’t want that either.

    We’re still months away from Google Glass’ public debut. Before that, Google can work with its developer partners to find a way for Glass to protect privacy while letting developers go nuts. Whether that means Google relaxing its policies remains to be seen, but it couldn’t hurt.

  • Unity Game Engine to End Flash Support

    It appears that Apple has been right all along that Flash is not the future of the web. Unity this week announced that it will be phasing out support for Adobe Flash development.

    Unity is a multi-platform game engine that is capable of producing games for consoles, PCs, touch devices, and the web. In particular, the engine has been used to create some of the most popular mobile games in recent years, such as Rovio’s Bad Piggies.

    David Helgason, CEO of Unity, announced in a blog post on Tuesday that the company has stopped selling Flash development licenses.

    Unity will continue to support its existing Flash customers “throughout the 4.x cycle.” Bug fixes will be made in future Unity 4.x iterations, but further development for the Unity engine on the platform has ceased.

    The decision was made, Helgason stated, because of Adobe. Helgason called recent versions of Flash unstable and stated that, “We don’t see Adobe being firmly committed to the future development of Flash.” He also pointed out that Adobe has cancelled the Flash Player Next project.

    Instead of Flash, Unity will be concentrating its development on its own Unity Web Player. Helgason stated that the Unity Web Player is installed on over 200 million computers and is used by one-third of all “Facebook gamers.”