Friday, November 1, 2013

Product Tours

"What makes people passionate, pure and simple, is great experiences. If they have great experience with your product and they have great experiences with your service, they’re going to be passionate about your brand, they’re going to be committed to it. That’s how you build that kind of commitment." – Jesse James Garrett

Cloud application developers would all like to think that their user interfaces are intuitive enough for people to understand without external guidance. But in the real world, as the application feature set grows in size and complexity, unassisted discovery and understanding of these features can become a problem. In complex applications it is common for technical support to receive an enhancement request for a feature that already exists because the user either couldn't find it or didn't understand its purpose. So a primary challenge that all Cloud application developers should take seriously is making sure that people understand how to use their products.

One approach to helping users understand how to use a product is to provide product tours that enable the user, at their own pace, to discover the product.  Product tours don't have to be monolithic tours of the entire product but can be more granular feature tours or concept tours made available to the user when feature or concept explanation is needed.


Check out this product tour example at easel.io. Easel.io is an online web site design tool that enables designers to design in the browser, which is a sure way to ensure pixel-perfect execution. It uses modern tools like web fonts and CSS3 but doesn't require the designer to shift back and forth between a text editor and the browser. Other team members, like Product Management, can stay ahead of the next sprint by using the tool to prototype the next feature. Easel can import elements from an existing site, so it's possible to make a pretty good clickable prototype and test it, all without the help of an engineer. And developers can just export the designer's actual choices for pixel dimensions and colors, eliminating the work of turning an image into a website.

This is a pretty complex design tool, but as you can see if you visit their demo, the Product Tour lets the user explore at their own pace and hand-holds the user through actually using the product to build a page. Simplicity may be embedded in your design, but until the user "gets it", the user won't be able to use your product effectively.

There are four fundamentals of effective product tours:
  1. A product tour is a journey, not a destination - each step in a product tour should build on the previous step and move the user down the path to understanding.
  2. One bite at a time - each step should highlight one important aspect of the product or feature that is easy to digest and understand.
  3. Don't make me think - the tour should take perceived complexity and make it brain-dead simple. After all, your UI is elegant and simple, so why should the explanation require the user to think?
  4. Short and to the point - the overall tour should be succinct and deliver the key points for the user to understand. Respect the user's time and break up monolithic tours into multiple tours.
There are two great product tour libraries for Bootstrap: one is Bootstrap Tour and the other is Hopscotch. Each has its own product tour that explains its features and capabilities, so check them out.
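To make this concrete, here is a minimal sketch of a small feature tour using Hopscotch's documented tour format (startTour, steps with target/placement/title/content). The element IDs and the copy below are hypothetical placeholders, not taken from either library's own tour.

```typescript
// Minimal Hopscotch-style feature tour (sketch). Assumes the Hopscotch script
// has been loaded on the page, exposing a global `hopscotch` object.
declare const hopscotch: { startTour(tour: object): void };

// Hypothetical element IDs (client-list, new-plan-button) stand in for real
// elements in your own application.
const featureTour = {
  id: "client-management-intro",
  steps: [
    {
      title: "Your clients live here",
      content: "One bite at a time: this list replaces the old file-open dialog.",
      target: "client-list",   // id of the DOM element to highlight
      placement: "right",
    },
    {
      title: "Start a new plan",
      content: "Each step builds on the last: create a plan for the selected client.",
      target: "new-plan-button",
      placement: "bottom",
    },
  ],
};

// Launch the tour when the user asks for help with this feature.
hopscotch.startTour(featureTour);
```

Bootstrap Tour is similar in spirit: you build a Tour object from an array of steps bound to page elements, so the same one-bite-at-a-time structure carries over.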
There are also other techniques for presenting product tours, such as slides and videos. Check out "Seven Exceptional Product Tours, And the Best Practices They Teach Us" by Morgan Brown.


Wednesday, October 23, 2013

Business Continuity, Disaster Recovery, Resiliency

"Court disaster long enough and it will accept your proposal." – Mason Cooley

Independent Software Vendors (ISVs), like any organization, must engage in Business Continuity Planning (BCP), also called Business Continuity and Resiliency Planning (BCRP). This is especially critical for ISVs that provide Software as a Service (SaaS) solutions to their customers. The process identifies exposure to internal and external threats that can disrupt or, worse, interrupt the operations that are the lifeblood of the business. Once these risks are identified, a recovery plan is developed to return the business to full operations. Once a recovery plan is in place, the business can evaluate, with the knowledge of the risks identified, what hard and soft assets can be applied to prevent a disruption from occurring in the first place, improving the resiliency of the business.

Some objectives for the BCRP that we listed in our last planning cycle:

  • Identify risks, critical production components and the impacts of their failure.
  • Establish systems to monitor the health of these critical production components (a minimal monitoring sketch follows this list).
  • Document recovery procedures to restore critical production components, in the event of failure, in a time frame that does not breach the customer End User License Agreement (EULA) or Service Level Agreement (SLA). These recovery procedures also help avoid the confusion experienced during an outage.
  • Identify personnel that must be notified in the event of an outage.
  • Create a plan to communicate with key people during the recovery and an escalation procedure.
  • Establish a testing procedure to validate the recovery plans.
  • Establish a process in which the plan can be maintained, updated and tested periodically.
  • Serve as a guide for the IT or Network Services Team.
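As an illustration of the monitoring objective above, here is a minimal sketch of a health-check sweep over critical production components. The component names, URLs, and alerting behavior are hypothetical placeholders; a real BCRP would wire this into your actual monitoring and notification tooling.

```typescript
// Minimal health-check sweep for the monitoring objective above (sketch only).
// Component names and URLs are hypothetical placeholders.
interface ComponentCheck {
  name: string;
  healthUrl: string;
}

const criticalComponents: ComponentCheck[] = [
  { name: "primary-database", healthUrl: "https://ops.example.com/db/health" },
  { name: "web-tier-load-balancer", healthUrl: "https://ops.example.com/lb/health" },
];

async function checkHealth(component: ComponentCheck): Promise<boolean> {
  try {
    // 5-second timeout; AbortSignal.timeout is available in modern Node/browsers.
    const response = await fetch(component.healthUrl, { signal: AbortSignal.timeout(5000) });
    return response.ok; // any non-2xx status counts as unhealthy
  } catch {
    return false; // network failure or timeout counts as unhealthy
  }
}

async function pollOnce(): Promise<void> {
  for (const component of criticalComponents) {
    if (!(await checkHealth(component))) {
      // A real plan would page the notification/escalation contacts identified above.
      console.error(`ALERT: ${component.name} failed its health check`);
    }
  }
}

// Sweep the critical components once a minute.
setInterval(pollOnce, 60_000);
```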

The BCRP process should be considered cyclical, something that should be executed at least once a year. The BCRP Cycle is composed of three main phases (the three R's): Risk Analysis, Recovery or Solution Design, and Resiliency or Maintenance. At a more detailed level, we can break the BCRP Cycle down into the phases described below.
Risk Analysis
This phase should identify exposure to internal and external threats that can disrupt or, worse, interrupt the operations of the company. Establishing 'Severity Levels' is useful in this phase. For example:


  • Level 1 Disaster Recovery - Severe Outage
    This level is assigned to those risk scenarios where the disruptions are, as the name implies, disastrous: they affect the availability of a component that interrupts operations completely, cannot be fixed at the Production Site, and require operations to be moved to a new location. These disruptions will result in instant escalation (a chain of notification and approval) to move the Production Site to the Disaster Recovery site, a significant declaration. Think of a meteorite hitting your data center, or, as actually happened this year at a local data center here in DC, a backhoe cutting your data center's internet trunk. Here the potential for EULA/SLA breach is high, depending on the failover time to the new site.
  • Level 2 Operational Recovery - Outage
    This level is assigned to those risk scenarios where the disruptions affect the availability of a component that interrupts operations completely but can be fixed at the Production Site. The recovery plan for these risk scenarios should complete well within the time that would lead to a breach of any customer EULA/SLA. There will be an escalation procedure in place that will move a Level 2 risk to a Level 1 risk should recovery take longer than expected.
  • Level 3 Offline Recovery - Redundant Outage
    This level is assigned to those risk scenarios where the disruption has no effect on operations, e.g., a pooled web server goes down and the load balancer automatically takes it out of circulation. The minimal impact of these outages is usually due to built-in resiliency; nevertheless, the outage must be addressed to bring the system back to complete health.


Recovery / Solution Design
First, it's important to define the two types of recovery planning that were outlined in the Severity Level definitions above:

  • Disaster Recovery
    The process of establishing procedures to recover operations in a location other than the primary production facility after a declaration of disaster.
  • Operational Recovery
    The process of establishing procedures to recover production in the same location; it does not require a declaration of disaster.
This phase should produce procedures and identify the soft and hard assets needed to recover from a disruption and bring the business back to full execution, in both the Disaster and Operational Recovery scenarios.

It is also important to define two numbers that will ultimately have a significant impact on the solution design (a short worked example follows the list):

  • Recovery Point Objective (RPO) - the acceptable latency of data that will not be recovered, i.e., the maximum acceptable window of data loss (usually driven by transaction volume and speed).
  • Recovery Time Objective (RTO) - the acceptable amount of time to restore operations (usually driven by EULA/SLA financial impact).
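As a short worked example with made-up numbers: if the SLA implies an RPO of 15 minutes and an RTO of 4 hours, a recovery test can be scored directly against those targets.

```typescript
// Illustrative only: the targets and measured values below are made up.
const rpoTargetMinutes = 15;   // at most 15 minutes of data may be lost
const rtoTargetMinutes = 240;  // operations must be restored within 4 hours

// Measured during a recovery test:
const minutesSinceLastReplication = 9; // worst-case data loss if we failed over now
const measuredRecoveryMinutes = 180;   // time it took to restore operations

const rpoMet = minutesSinceLastReplication <= rpoTargetMinutes;
const rtoMet = measuredRecoveryMinutes <= rtoTargetMinutes;

console.log(`RPO target ${rpoTargetMinutes} min: ${rpoMet ? "met" : "MISSED"}`);
console.log(`RTO target ${rtoTargetMinutes} min: ${rtoMet ? "met" : "MISSED"}`);
```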


Implementation
This phase should establish the necessary monitors, documentation, communication protocols, testing and failover environments to successfully execute and test the Recovery / Solution Design. Disaster Recovery will require a failover operations site to be put in place at a different location; this site might be a good candidate for conducting testing and validation of the Recovery / Solution Design rather than jeopardizing production.

Testing and Validation
This phase should execute the Recovery / Solution Design through forced disruptions (on the failover or test platform) which, if successful, will validate the recovery plans and provide metrics for the impact on customer EULA/SLAs. Part of the validation process is to benchmark your Disaster Recovery failover site to make sure its performance and throughput are acceptable; remember, you're recovering for all your customers, not just a select few.

Maintenance and Resiliency
This phase is a post-mortem: what did you learn from the Testing and Validation phase, and which parts of the Recovery / Solution Design had to be modified because they didn't work? This information should update the BCP. It's also useful here, since you have become so educated on your operations and the risks that can impact them, to use this knowledge to identify production components that may be candidates for resiliency improvements. Where these improvements include capital expenses, they should be included in the next budgeting cycle, using your BCP to make the business case for the expenditure.

Good Luck!




Thursday, October 10, 2013

Moving Legacy Applications to the Cloud - Hybrid Approach with Virtualization

In my last post, "Moving Legacy Applications to the Cloud - Transposition versus Rewrite", I discussed a challenge many ISVs and Corporate IT departments are grappling with today: moving legacy applications to the Cloud. One effective option discussed in that post was transposing code using frameworks and tools that are available today rather than a monolithic rewrite of the code. Another option available today is the Hybrid Approach using Virtualization.

The term Hybrid in this case means a cloud application that has two kinds of components that together produce the same results as if there were only one kind of component. Specifically, a value-add cloud application can be built around the legacy application, replacing some of its parts with parts that leverage Cloud capabilities, while other parts of the legacy application remain as is, live in the cloud, and are streamed down to the user on demand.

As a real-world example, consider a hybrid cloud offering we just delivered to the market that we'll call the Tax Planner for the Web. Tax Planner is a desktop application that is the #1 market-leading product in professional tax planning software. The product was first delivered to the market as a programmed chip for the HP Calculator; yes, you read that right. It was then rewritten to run on the personal computer. It is a very complex application with many of the characteristics of a legacy application that I won't go into here. The application must be updated each year for new tax legislation and retains the previous years' tax calculations; in fact, it supports tax calculations going back to 1987. In addition, numerous spreadsheets or worksheets are used for data entry, basically mirroring the information you would enter into a complete Form 1040, so think Schedule A, C, D, etc. The customers, tax professionals and accountants, are comfortable with the spreadsheet metaphor for data entry, and this was challenging to replicate well in a Cloud application before HTML5. So a monolithic rewrite to move the application to the Cloud to meet market demand is a challenging project.

The solution: a Hybrid Cloud Application that leveraged the Cloud to provide the features customers were looking for in a Cloud Application and leveraged the desktop to run the remainder of the legacy application, providing a rich and responsive user experience. The legacy application uses a file metaphor to store clients' planning data, so there wasn't the rich Client Management that customers were requesting. Customers were interested in having backup capabilities that didn't require keeping track of files. Customers were interested in collaboration and workflow capabilities across their client data. So these were features that made sense to move to the Cloud. On the other hand, customers were not interested in learning a new user interface to enter plan data; think navigating the worksheets that comprise a complete Form 1040. They were happy with that part of the program.

To develop this Hybrid Cloud Application we needed a capability to stream down a virtualized version of the shipping program to the desktop on demand. The overall concept was to develop a native Cloud application that would include login/security, robust Client Management and other value-add features we could easily develop for the Cloud, and then, when the user navigated to a client and clicked on it to open it, we would stream down the data entry part of the legacy application along with the client data. In fact, the same shipping desktop application would be used in the Cloud offering as is shipped to our conventional desktop customers; we would just handle File Open and Save through a new DLL that makes web service calls to the Cloud to display, open and save clients from the Cloud server.
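To give a feel for what that file-handling layer's web service calls might look like from the Cloud side, here is a rough sketch of the contract. The endpoint paths, field names, and bearer-token auth are illustrative assumptions, not the actual Tax Planner for the Web API.

```typescript
// Hypothetical client-side wrapper for the cloud file services described above.
// Endpoint paths, field names, and the auth header are illustrative assumptions.
interface ClientFile {
  clientId: string;
  fileName: string;
  planData: string; // base64-encoded plan file as stored by the desktop engine
}

const API_BASE = "https://cloud.example.com/api"; // placeholder URL

async function listClients(sessionToken: string): Promise<ClientFile[]> {
  const res = await fetch(`${API_BASE}/clients`, {
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
  return res.json(); // displayed in the Client Management screens
}

async function openClient(sessionToken: string, clientId: string): Promise<ClientFile> {
  const res = await fetch(`${API_BASE}/clients/${clientId}`, {
    headers: { Authorization: `Bearer ${sessionToken}` },
  });
  return res.json(); // handed to the streamed desktop application in place of File > Open
}

async function saveClient(sessionToken: string, file: ClientFile): Promise<void> {
  await fetch(`${API_BASE}/clients/${file.clientId}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${sessionToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(file), // replaces File > Save in the legacy application
  });
}
```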

To accomplish the virtualization and streaming functionality we conducted a broad market search for an appropriate technology. I won't go into all the available vendors, but we chose technology from Spoon, previously Xenocode, developed by Code Systems Corporation, a company founded by former Microsoft engineers and researchers. The technology enables application virtualization, portable application creation, and digital distribution.

We use Spoon Studio, which packages software applications into portable applications: single executable files that can be run instantly on any Windows computer. It only emulates the operating system features that are necessary for applications to run, to reduce resource overhead. Virtualized portable applications run independently from other software, so there are no conflicts between them and other programs, i.e., no DLL conflicts.

We deploy the virtualized application on Spoon Server, which we call when the user selects a client, to stream the application down to the user's desktop. Spoon provides browser plugins for all popular browsers that handle the client side of receiving and launching the streamed application. Once it is launched, we create a communication link for our web services back to our Cloud application for further processing. A couple of interesting features here: the initial download is fast, and subsequent launches of the application can be almost instantaneous because the plugin is smart enough to determine whether the currently sandboxed application is up to date; if it is, it just launches the application rather than streaming it down again. This also ensures that the application is always the latest, up-to-date version, which is critical for Tax Planners.

The Hybrid Cloud application was a big hit with our customers: they loved the cloud features and were amazed they didn't have to learn a new interface. We were also recognized by the industry, earning a finalist position in the CPA Practice Advisor 2013 Innovation Awards.

You can learn more about Spoon on their website, and you can try a virtualized application yourself there as well.


Wednesday, September 25, 2013

Moving Legacy Applications to the Cloud - Transposition versus Rewrite


Moving legacy applications to the cloud is an issue many ISVs and Business IT departments are grappling with today. Although there are a variety of technologies available to create new cloud applications, engaging in a monolithic rewrite of a legacy application in new technologies may not be a viable option from either a cost or a time-to-market perspective. In addition, the modern target platforms for the next generation of the legacy application are much more complex than the original target platform the legacy application was first built for. New target platforms are characterized by multi-platform access, device independence and mobile-enabled user interfaces.

There are two options worth considering to address this challenge, neither of which requires a rewrite, and both of which can be supported by agile best practices and continuous deployment.

Transposition versus Rewrite

The first option is to approach the legacy application as a real estate developer might approach an old house purchased as an investment. Although I am not a real estate agent, I would assume the questions are similar to the questions we ask when considering the evolution of a legacy application to new target environments. Is the house in such bad shape that it must be torn down and rebuilt from scratch, i.e., is it riddled with termites, is the foundation cracked? Combined with this analysis: is there really a budget to build a new house and still be profitable? Was time factored into the investment to build a new house, or was a quick turnaround built into the investment returns? Alternatively, if the answer is no, the house isn't a tear-down, then the question is what needs to be done to the house on a more limited time frame and budget to modernize it with the features that sell in today's real estate market: does the kitchen need to be remodeled? Should some walls be removed to increase the size of the master bedroom?

Now let's put these questions in the context of a legacy application. Do we need to start from a clean slate because nothing is salvageable, i.e., is the application so buggy and unpredictable, with crashes so frequent, as to render it useless? In addition, we need to ask whether we have the budget to rewrite the application and whether the time to market fits the investment case. Alternatively, if the answer is no, then the question is what can be done to the legacy application on a more limited time frame and budget to modernize it to operate in the new target environments of today. And that is the idea of transposition: a unique paradigm that combines concepts from migration, rewrite and virtualization with a set of supporting technologies and integrated development environments into a single solution that reduces the time to market and budget required to transpose the legacy application into a new, modern application that can run on the modern target environments of today. The leader in transposition techniques, a company headquartered in Israel called GizMox, refers to transposition as 'computer-aided rewrite that reproduces an application that runs on one computing architecture as an equivalent HTML5 application that will work multi-browser on multiple devices.'

The transposition magic is contained in their products Instant CloudMove and Visual WebGui. Below is the full transposition process.

The Transposition Process
The first phase of transposition is an assessment phase. GizMox has made available an assessment wizard that you can download and run on the source code of your legacy application. The tool analyzes the source of the legacy application using a virtual compiler to identify flow and dependencies, which it uses to build an Abstract Syntax Tree (AST) object model. It then employs out-of-the-box syntax translation and mapping of the tree functions to new functions or packages, native to the new target environments where possible, to provide metrics. The assessment provides a comprehensive report including an automation level assessment and a breakdown of the required resources and packages.
Following this assessment report, GizMox offers a more thorough analysis of your application which outlines accurate costs and establishes a detailed work plan, including recommended personnel and time to market. Also offered is a free trial of Instant CloudMove, which actually transposes the application so you can develop your own Proof of Concept (POC); alternatively, you can contract GizMox to develop a POC based on a representative module of 10,000 lines of code, and GizMox will even help you create a representative module from your application.

The Instant CloudMove set of tools transposes most of the original legacy application and user interface code to its new (web-based) environment automatically, approximately 80-85%. The transposition is executed by a sophisticated engine that translates the source language into an intermediate target language. By transposing into an intermediate language, the original legacy application code and the target code are isolated from each other, so work can continue on the original legacy code and be translated back in should a merge be necessary, eliminating the need for a code freeze of the original source application. This frees the transposition team to work iteratively at their own pace to deliver the highest quality application. It's also important to mention that using intermediate code enables 'push button' generation of code for the desired target environment. So, working from the intermediate code, new retargeted applications can be generated at any time.
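To illustrate the general idea of going through an intermediate representation (a toy sketch only; the node shapes and emitter below are invented for this post, not GizMox's actual engine or formats):

```typescript
// Toy illustration of source -> intermediate representation -> target generation.
// The node shapes and emitters below are invented for this example only.
type IrNode =
  | { kind: "showMessage"; text: string }               // e.g. a legacy MsgBox call
  | { kind: "assign"; target: string; value: string };  // e.g. a legacy assignment

// A real transposition engine parses the legacy source into an AST and lowers it
// to IR; here we just hand-write a couple of nodes.
const ir: IrNode[] = [
  { kind: "assign", target: "total", value: "price * quantity" },
  { kind: "showMessage", text: "Order saved" },
];

// "Push button" generation: the same IR can be emitted for different targets.
function emitJavaScript(nodes: IrNode[]): string {
  return nodes
    .map((n) =>
      n.kind === "assign"
        ? `let ${n.target} = ${n.value};`
        : `alert(${JSON.stringify(n.text)});`
    )
    .join("\n");
}

console.log(emitJavaScript(ir));
```

The point is that once the legacy source has been lowered into a neutral form, generating code for a new target (or regenerating it after the legacy source changes) becomes a repeatable step.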

And because it integrates with the Microsoft Visual Studio IDE, it provides the flexibility to customize, upgrade and add pieces of code as you go. There are even sophisticated pattern-matching capabilities you can use to create your own mapping packages for the 15-20% of the code that is not automatically transposed.

Using simple drag and drop actions, you can redesign legacy user interfaces to new front-ends such as HTML5 using another GizMox product Visual WebGui (VWG).  VWG extends ASP.NET APIs to support rich user interfaces and incorporates Ajax connections for further enhancements. By targeting VWG, you are practically targeting an enhanced ASP.NET application with an HTML5 user interface.  Once your application is transposed into VWG HTML5, you will be able to inherit part or all of the generated forms and redesign them into mobile and tablet form factors using the VWG Visual Studio integrated designer.

I have been involved in many rewrites of legacy applications for many target environments over the years, and these are the kinds of tools and translators that each development team would create in-house to 'transpose' the legacy application to run in a new target environment. In most cases legacy applications have embedded rules and calculation engines, and in extreme cases the engines have a time dimension that is built up over years. Rewriting these rather than transposing them can be a risky proposition, especially when the customer can't tell the difference.


Here are some tables from the GizMox site that make the case for transposition over standard rewrite.

Comparing Instant CloudMove to a standard rewrite.

Factor | Standard Rewrite | Instant CloudMove
Quality of code | High | High
Level of automation | Low | High
Time to market | Long | Short
Ability to estimate resources, costs and completion times | Low | High
Built-in capability of adaptation to tablet and mobile 'touch' devices | Requires multiple skill sets | Built-in drag 'n drop simplicity
Level of MS Visual Studio integration | Varies | Very high
Risk | High | Low
Cost | High | Considerably lower




Below is a dollar-for-dollar comparison of smart rewrite against traditional rewrite, showing the rate of progress and the cost per line of code (LOC).

Source | Rate of Progress (LOC per Developer per Day) | Cost per Line ($)
Cutter Consortium | 185 | 12.3 - 18.5
Gartner | 170 | N/A
BNA Software | 284 | N/A
Tactical Strategy Group | 50 | 15
Forrester | N/A | 6 - 23
Fedora Linux | N/A | 52
HP | 100 | 10 - 30
Gizmox Instant CloudMove | 3,000 - 6,000 | 0.35 - 1.20

Published figures for LOC a day and cost per line for system rewrite.

Friday, January 25, 2013

How many lives can VB6 have?


The “It Just Works” compatibility for Visual Basic 6 applications will guarantee that VB6 lives at least through the full lifetime of Windows 8, until 10/01/2023. Visual Basic 6 first shipped in 1998, so its applications will be at least 25 years old before they see the end of their supported lifetime. Compare that to applications written for the early versions of iOS, or for the Microsoft .NET Framework 1.0, released in 2002, which is incompatible with Windows 7, released in 2009.

This seems strange when you consider that all versions of the Visual Basic development environment from 1.0 to 6.0 have been retired and are now unsupported by Microsoft. In fact, it's difficult to find a copy to install in the event you need one. The required runtime environments are unsupported too, with the exception of the Visual Basic 6 core runtime environment, which, as stated above, is supported through the lifetime of Windows 8. What's even stranger is that many third-party components that developers used to make their applications richer and easier to use are no longer available or supported.


The main reason Microsoft won't kill the unkillable is that there are too many business applications out in the market that have yet to be upgraded to a modern language and technology. If Microsoft released an OS like Windows 8 that didn't support the VB6 runtime, many businesses wouldn't and couldn't buy it, because their applications would break.


Microsoft is shackled by the incredible success of VB6, having to make concessions in how far it can go in technology shifts for a particular OS update. See the Windows 8 Desktop, where all legacy applications run outside the new, modern-looking Metro environment. Microsoft has also made many missteps in handling VB6. Rather than bringing the VB6 code base along with incremental changes and upgrades to support new versions of the operating system, like Apple does with both iOS and OS X, Microsoft waited and tried to force a complete rewrite with the introduction of .NET, in a different language to boot (VB.NET). This just doesn't work for an obviously change-resistant customer base.


VB6 is going to be around for a while, but it won't be around forever; 25 years just might be enough. Any reasonable business or ISV should consider moving their business applications from VB6 to another language and technology.


Setting the VB6 cult perspective aside, I would venture to bet that any application currently in VB6 is running into issues on several fronts. First, it is probably getting harder to find software engineers to work on the application. Look at this graph roughly representing the popularity of VB6; it looks like the heart monitor of a dying patient.




Second, it is probably becoming more difficult to add meaningful features and UI to the application because of the lack of a supporting third-party ecosystem; practically no one develops or maintains, much less creates, new VB6 widgets or components. Third, your application no longer enjoys favored-nation status in the current desktop OS, Windows 8. Sure, you kind of blended in through Windows 7, even though the application looked different when it launched; now you launch in a separate area provided specifically for legacy applications like you. Finally, the gap between your application and current application technology is starting to look like the Grand Canyon, i.e., what is your mobile strategy with VB6?


If you are looking for an innovative way to transpose your VB6 applications, check out GizMox; I am looking into them now and will let you know what I find out.



Friday, January 18, 2013

Look Mom No Coding

Creating Websites without Coding 

There are several sites on the market that enable anyone to create a web site, yes, even users who have no idea what HTML is and think JavaScript is a dialogue over coffee. Here's a Clipset on Meaki of some of the most popular vendors. Recently I put one of these sites to work on a volunteer project. The goal of the project was to update and expand the usefulness of the McLean Youth Athletics Association (MYA) web site. I am on the Board of MYA, which has the simple but important goal of enabling any youth who wants to play a sport in the McLean community to be able to play that sport, by providing the foundation and support for sports leagues to form and thrive.


The vendor I chose is named Moonfruit. Moonfruit is a UK-based web hosting company. They employ a website construction tool called SiteMaker, which is designed to make website development simple without sacrificing design or functionality.



The tool is polished and has a rich feature set. You can start with a wide range of clean, modern, predesigned templates and use the drag-and-drop tools to build your site.




There are some really nice social features in the tool set, and if you know a little about embedding and HTML you can add HTML Snippets, which opens up another world of possibilities.

 
A benefit of using a tool like this, rather than hand-coding it yourself or outsourcing the work, is that the tool implements the latest best practices and technology when it publishes your website. You design your site the way you want it, and the very qualified engineers at MoonFruit make sure it is published correctly for web, mobile and social targets. That's right, mobile and other form factors. MoonFruit automatically creates a mobile-optimized version of your site and can even push your site into Facebook with just a few clicks.

The MYA site isn't finished yet, there may be broken or incorrect links, etc., we're still testing, and the URL is still at MoonFruit, but it provides an example of what a tool like MoonFruit can help you do in a few hours of work. Here is the old site and here is the new site built using MoonFruit.

Thursday, February 3, 2011

Health Care and Mobile Devices

"When wealth is lost, nothing is lost; when health is lost, something is lost; when character is lost, all is lost." – Billy Graham




Regardless of where you come down on the Health Care Debate, there is no debating that the advancement in mobile networks and devices will have a significant impact on the Health Care Industry.

Sprint Nextel CEO Dan Hesse made the case at the annual conference of the Healthcare Information and Management Systems Society on March 1.


Today, on a planet of 6.8 billion people, there are more than 4 billion active cell phones—more mobile phones in the world than TVs, PCs and cars combined. The cell phone is the most rapidly adopted technology in the history of this planet... High mobile phone penetration provides an incredible opportunity for us to work together to improve health care and health care access, regardless of location, age, gender or disability.


Mobile devices may be on their way to replacing the characteristic stethoscope hanging from every doctor's neck or pocket. Today 64% of U.S. physicians use smartphones, and that figure is expected to reach 81% by 2012, according to the health care consulting firm Manhattan Research. With the introduction of iPhone OS 3.0 just over a year ago, Apple set the stage for increased penetration of the iPhone into the Healthcare industry. Features included in that release made this possible, such as the External Accessories API, which allows external accessories to interface with the iPhone via the dock connector or wirelessly over Bluetooth. Apple used a blood pressure cuff as an example, so I guess the stethoscope analogy isn't that far off.

In fact, Apple seems to have always had its eye on the Healthcare market, partnering with Dr. Geoffrey Rutledge, chief medical officer for Epocrates Inc., to reformat a huge medical database into a downloadable app known as Epocrates RX before the iPhone was even launched. Today, Rutledge claims Epocrates RX is used by one in five U.S. doctors as a drug reference and to prevent interaction problems between a patient's multiple medications.

In India, the iPhone is being used to fight a disease called Retinopathy of Prematurity (RoP), an eye disease that affects thousands of prematurely born infants and can cause blindness if not swiftly treated. This type of disease is especially a problem in countries where there is a lack of adequate facilities, long distances to travel, illiteracy and low accessibility to quality healthcare. Laboratory assistants take pictures of the retinas of prematurely born babies and transmit them via broadband to pediatric eye surgeons, many times hundreds or thousands of miles away. The surgeons use the iPhone's high-resolution graphics and pinch-and-drag capabilities, combined with special software, to diagnose and then determine treatment.

Currently there are hundreds of mobile applications in the App Store's "Healthcare and Fitness" category available to businesses and consumers. Many more applications will be needed by the growing Healthcare Industry. Device manufacturers are improving their operating systems and SDKs to appeal to the application developers designing tomorrow's Healthcare solutions.

A few application areas that apply to Healthcare include:

  • Medical Spanish or medical translation capabilities so doctors can communicate with foreign-language-speaking patients.
  • Fast medication facts, alternative medications, multiple drug interaction data, health plan insurance guidelines on medications.
  • Remote diagnostic and treatment recommendations, bringing Healthcare to the patient.
  • Receive and analyze laboratory test results.