Question: 1
Your network contains three Hyper-V hosts. You add all of the hosts to a cluster. You need to create highly available storage spaces that connect to directly attached storage on the hosts. Which cmdlet should you use?
A. Update-ClusterVirtualMachineConfiguration
B. Enable-ClusterStorageSpacesDirect
C. Set-StoragePool
D. Add-ClusterDisk
Answer: B
Explanation:
The Enable-ClusterStorageSpacesDirect cmdlet enables highly available Storage Spaces that use directly
attached storage, known as Storage Spaces Direct (S2D), on a cluster.
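A minimal PowerShell sketch of the sequence, run from one of the cluster nodes; the pool name is hypothetical, and you should validate the configuration against your own hardware first:

# Validate the nodes and disks for S2D before enabling it
Test-Cluster -Node Server1,Server2,Server3 -Include "Storage Spaces Direct"
# Enable Storage Spaces Direct on the existing cluster
Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2DPool" -Confirm:$false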
Question: 2
You have an Active Directory domain named Contoso.com. The domain contains servers named Server1, Server2, and Server3 that run Windows Server 2016. Server1 and Server2 are nodes in a Hyper-V cluster named Cluster1. You add a Hyper-V Replica Broker role named Broker1 to Cluster1.
Server3 is a Hyper-V server. A virtual machine named VM1 runs on Server3. Live Migration is enabled on all three servers and is configured to use Kerberos authentication only. You need to ensure that you can perform a live migration of VM1 to Server2. What should you do?
A. Add the Server3 computer account to the Replicator group on Server1 and Server2.
B. Modify the Delegation settings on the Server3 computer account.
C. Modify the Storage Migration settings on Server3.
D. Modify the Cluster permissions for Cluster1.
Answer: B
Explanation:
If you have decided to use Kerberos to authenticate live migration traffic, configure constrained
delegation before you proceed with the rest of the steps. Constrained delegation is configured on the
Delegation tab of the computer account in Active Directory Users and Computers.
References:
https://technet.microsoft.com/en-us/library/jj134199(v=ws.11).aspx
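The same delegation settings can also be scripted. A hedged sketch using the ActiveDirectory module, run as a domain administrator; the server and domain names come from the scenario, and the exact service principal strings should be verified against the documentation:

# Allow Server3 to delegate to the migration and file services on Server2
# (equivalent to the Delegation tab of the Server3 computer account).
Set-ADComputer -Identity "Server3" -Add @{
    'msDS-AllowedToDelegateTo' = @(
        'Microsoft Virtual System Migration Service/Server2.contoso.com',
        'cifs/Server2.contoso.com'
    )
}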
Question: 3
Your network contains an Active Directory domain. The domain contains two Hyper-V hosts. You plan to perform live migrations between the hosts. You need to ensure that the live migration traffic is authenticated by using Kerberos. What should you do first?
A. From Server Manager, install the Host Guardian Service server role on a domain controller.
B. From Active Directory Users and Computers, add the computer accounts for both servers to the Cryptographic Operators group.
C. From Active Directory Users and Computers, modify the Delegation properties of the computer accounts for both servers.
D. From Server Manager, install the Host Guardian Service server role on both servers.
Answer: C
Explanation:
If you have decided to use Kerberos to authenticate live migration traffic, configure constrained
delegation before you proceed with the rest of the steps. This is done by modifying the Delegation
properties of both computer accounts in Active Directory Users and Computers.
Question: 4
You have a failover cluster named Cluster1. A virtual machine named VM1 is a highly available virtual machine that runs on Cluster1. A custom application named App1 runs on VM1. You need to configure monitoring on VM1. If App1 adds an error entry to the Application event log, VM1 should be automatically rebooted and moved to another cluster node. Which tool should you use?
A. Resource Monitor
B. Failover Cluster Manager
C. Server Manager
D. Hyper-V Manager
Answer: B
Explanation:
In Windows Server 2012/2016, Failover Clustering includes a feature called "VM Monitoring" that detects when a virtualized workload in the cluster is behaving abnormally and lets the cluster service take recovery actions. It monitors the health state of applications running within a virtual machine and reports it to the host level so the cluster can respond, for example by restarting the VM or failing it over to another node.
VM Monitoring can be easily configured using the Failover Cluster Manager.
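The same configuration can also be scripted. A hedged sketch, assuming App1 writes its errors under an event source also named App1; the event ID shown is illustrative:

# Monitor App1's Application-log entries for VM1; on a matching error event
# the cluster restarts the VM and can fail it over to another node.
Add-ClusterVMMonitoredItem -VirtualMachine "VM1" -EventLog "Application" -EventSource "App1" -EventId 1000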
References:
https://blogs.msdn.microsoft.com/clustering/2012/04/18/how-to-configure-vm-monitoring-in-windows-server-2012/
Question: 5
You have a Hyper-V host named Server1 that runs Windows Server 2016. Server1 has a virtual machine named VM1. VM1 is configured to run the Docker daemon.
On VM1, you have a container network that uses transparent mode. You need to ensure that containers that run on VM1 can obtain IP addresses from DHCP.
What should you do?
A. On VM1, run docker network connect.
B. On Server1, run docker network connect.
C. On VM1, run Get-VMNetworkAdapter -VMName VM1 | Set-VMNetworkAdapter -MacAddressSpoofing On.
D. On Server1, run Get-VMNetworkAdapter -VMName VM1 | Set-VMNetworkAdapter -MacAddressSpoofing On.
Answer: D
Explanation:
If the container host is virtualized, and you wish to use DHCP for IP assignment, you must enable
MACAddressSpoofing:
PS C:\> Get-VMNetworkAdapter -VMName ContainerHostVM | Set-VMNetworkAdapter -MacAddressSpoofing On
The command needs to be run on the Hyper-V host.
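For context, a hedged sketch of how a transparent network is created and used on VM1 once spoofing is enabled on the host; the network and image names are illustrative:

# On VM1: create a transparent network and attach a container to it; the
# container then leases its IP address from the external DHCP server.
docker network create -d transparent TransparentNet
docker run -it --network=TransparentNet microsoft/nanoserver cmd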
References:
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/management/container_networking
Where do you rate?
According to data from the Cvent platform and the marketing and event professionals who have taken the assessment so far, about 15 percent of programs are at an E1 level, while the vast majority, around 65 to 70 percent of programs, are at the E2 elevated level. About 10 percent of programs are at the E3 level, and only about five percent are at the E4 level.
What gets measured
There are four key areas that help determine what level of maturity any company is at.
First is the organization’s event strategy. What is an organization’s ability to develop a personal, measurable, and data-formed event program that’s aligned to their goals? Does your organization have a deliberate reason for each event they host and attend? Is there a specific reason that justifies why a company hosts every event that they have?
Second is an organization’s ability to execute that strategy efficiently and effectively across the whole event program. Levels of execution vary, from Excel sheets and Google docs all the way to organizations that have full specialized planning teams, access to external resources, the ability to deploy technology to be able to help them organize and execute and market their events and prove impact and ROI.
The next is the attendee experience. This is looking at an organization’s ability to deliver a seamless and personalized event experience to their attendees, and the ability to measure the impact of that initiative. Are you able to deliver an impactful, personalized experience to the attendee? Are you able to measure and track that attendee journey and that attendee experience at your event?
The final pillar is measurement and optimization. This is the organization’s ability to translate attendee and event data into provable value for the organization, through the full life cycle. How is that event strategy put together? How is that strategy executed? What is the quality of the attendee experience that’s delivered on site? After the event is over, how is measurement and optimization done by that organization going forward?
The event evolution model and scorecard
If you’re asking yourself, “How would my organization answer these questions? Where would we fall? Are we emergent? Are we experts?” there’s a tool to help you get there.
The event evolution model takes you through 14 to 16 questions: How do you measure KPIs? How do you select events? What type of staffing do you have? What type of technology do you use? How are you measuring and optimizing? How do you look at ROI? The assessment takes about 10 minutes, and the scorecard will rate your event program, offer a write-up of what that level means, and then, for each of the four pillars (How do you make your strategy? How do you execute your event? How do you deliver an attendee experience on site? How do you measure and optimize), you’re offered a practical, actionable tip that can take you to the next level of performance.
There’s a companion ebook that drills down into each of the pillars and every permutation of maturity to give you a quick cheat sheet on what E1, E2, E3, and E4 levels of performance look like across the center of performance and across all of those benchmark levels.
If you’re interested in finding out about your individual organization’s event program, how it measures up, and getting help to chart a course to get your event program up to that next level of execution, head here now for the assessment tool.
CHANGING THE GAME
When did AI begin to have an impact on the video game business? If you ask Kevin Gammill, partner general manager for PlayFab at Microsoft, he’ll reach back four decades and bring up early computer-controlled opponents such as the flying saucers in Atari’s Asteroids arcade game. “I think AI has been around as long as gaming’s been around,” he says.
Tamir Melamed and Kevin Gammill [Photo: Harry McCracken]
In 2019, AI’s potential applications in gaming go far beyond making decisions for bad guys. Microsoft–the rare company with deep investments in both games and basic computer research–is well positioned to explore them.
That includes useful stuff that makes life better for game players without screaming “Hey, artificial intelligence!” Studies, for instance, have shown that online competition greatly benefits from players being matched with others of roughly comparable skill. “If you go into a game and you just get slaughtered, it’s probably not a good experience,” explains Gammill. “If everyone’s too easy, it’s also probably not a good experience.” Xbox Live has long used an algorithm called TrueSkill (recently updated as TrueSkill 2) to help ensure that contestants are neither bored nor massacred by their opponents.
Another piece of practical AI was inspired by the fact that Microsoft “heard for years loud and clear from gamers that they would greatly prefer to spend a lot more time playing games than downloading games,” says Ashley McKissick, who manages the Game Pass service. The company initially tried to let players skip ahead to the action before a download was complete through a system that required some heavy lifting on the part of game publishers and was therefore not universally adopted.
Starting last summer, Microsoft replaced this unsatisfactory handwork by humans with a piece of AI-enhanced technology called FastStart. It leverages machine learning to determine which bits of a game to download first, allowing gamers to begin playing up to twice as fast. “We’re not really changing the laws of physics here, but it does make your download much smarter,” says McKissick.
Increasingly, Microsoft is formalizing the kind of collaboration that helps AI make its way into games. Similar to the MSR/Office meeting called Roc, a confab called Magneto is designed to cultivate conversation–and outright hacking–between MSR and the gaming group. Along with those two constituencies, “there are people from Bing there, there are people from Windows there, there are people from Azure there,” says Tamir Melamed, Microsoft’s head of PlayFab engineering. “Because there’s a lot of those technologies that we think we can share down the road.”
One joint project emerged from Microsoft’s annual companywide hackathon. In 2017, the gaming group was wrestling with the challenge of curating Mixer, a game-streaming service–in the same Zip Code as Twitch, but more interactive–which Microsoft had acquired in the form of a startup called Beam. “We found ourselves with a much larger volume of streams than we had anticipated,” says Chad Gibson, Mixer’s general manager. “And so we were trying to find, ‘How can we provide new, unique ways of allowing players of PlayerUnknown’s Battlegrounds or Fortnite to be discovered?'”
Chad Gibson and Ashley McKissick [Photo: Harry McCracken]
At around the same time that the Mixer team was asking itself that question, the hackathon was being won by some Microsoft Research staffers who’d devised “Watch For,” an AI system for analyzing live video streams and identifying specific events therein. (Microsoft was so impressed by the technology’s commercial possibilities that it announced the team’s victory without disclosing exactly what it had created.) The two groups collaborated to use Watch For as the basis for HypeZone, a Mixer feature that lets viewers tune in to the most climactic moments in game streams in progress. “It allowed us to do new forms of discovery that we honestly didn’t think were possible,” says Gibson.
As long as gaming has its frustrations, AI should provide further ways to mitigate them. Recently, Gammill was engaged in heated competition in the Tom Clancy first-person shooter Rainbow Six Siege against three friends. Then one contestant’s internet connection choked. “Three of us are running around and a frozen character is standing there,” he says. And a frozen character can’t do much except be mowed down.
A better scenario would be if the game could use AI to determine that a player had gotten cut off, and then take temporary control of the corresponding character–and play in the same style as that person. “Now we’re very close to scenarios like that actually coming to fruition,” says Gammill.
THE SILICON FACTOR
Steve Jobs was fond of saying that Apple was the only computer company that built “the whole widget”–not just software or hardware, but both, integrated so well that the seams of the experience start to fade away. In recent years, that philosophy has reached its ultimate expression as Apple has even designed its own iPhone and iPad processors and optimized them for running Apple software.
The same vertical integration that’s a boon for a smartphone or tablet makes sense, on a grander scale, for a data center–such as the ones that power Microsoft’s Azure services. Enter Project Brainwave. That’s the name for the custom hardware accelerator Microsoft has designed–using Intel field-programmable gate arrays (FPGAs)–specifically for the purpose of speeding up AI running in the Azure cloud.
Microsoft’s move into designing its own hardware for optimal AI is hardly unique. Both Google and Amazon are also moving down the stack from software to silicon for similar reasons. But Microsoft isn’t just hopping aboard a trendy bandwagon. Project Brainwave is the end product of an opportunity Doug Burger began thinking about almost a decade ago–and at first, he did it on his own. “I started the work in 2010 and then exposed it to management after about a year,” remembers Burger, who was a researcher within MSR at the time.
Project Brainwave sprang from Microsoft’s realization that embracing AI needed to start at the chip level [Photo: courtesy of Microsoft]
Conventional chips know how to execute the computing instructions in their repertoire when they leave the factory, and can never be retrained for a different purpose, such as efficiently running a new machine-learning algorithm. FPGAs, by contrast, are like chameleons, says Burger. “What the FPGAs allow us to do is build stuff really fast and get it into production, and then iterate on a very rapid cadence,” he explains. “So that chameleon is changing colors really fast and getting better every time it changes color.”
FPGA technology allows Microsoft to deliver highly efficient deep learning as a service in a way that addresses specific customer requests. “A lot of the problems that they want to solve are around image analysis,” says Ted Way, senior program manager for Azure Machine Learning. “‘I want to look at my manufacturing defects.’ ‘I want to look at whether [products are] out of stock.’ ‘I want to see if people are smoking at my gas station because I’m afraid of fires.’ Doug’s team was able to turn that around and build these convolutional neural networks that ran super fast on the FPGA in just six months or so.” By silicon standards, that’s quick.
When Burger had begun his personal investigation of FPGAs in 2010, it wasn’t clear–at least to people who aren’t prescient computer scientists–how quickly AI would go mainstream, let alone that delivering it as a service would become a strategic imperative for a company such as Microsoft. Soon enough, Microsoft understood the value his brainchild could bring to Azure. Last July, after Project Brainwave left the lab, so did Burger and his team. Today, they’re continuing their work as part of the Azure group rather than MSR.
Such a segue is not unusual. “One thing about the Microsoft culture now, that boundary between research and product has blurred quite a bit,” says Burger. “The product groups have lots of people who were formerly researchers and are developing new stuff. Research has not only people that do research but engineers building stuff. It’s more of a continuum.” Nadella, he adds, “has done a great job of pushing down this kind of innovation.”
SELF-SERVE SMARTS
With Azure, Microsoft is in a race with Amazon and Google to provide AI and other advanced computing functions to businesses of all sorts as on-demand services. That’s not just good for outside companies; there are also groups within Microsoft that can benefit from pre-packaged AI and machine learning.
Case in point: Codie, a multilingual chatbot designed to provide information about coding. An internal Microsoft experiment for now rather than a commercial product, it sprang from the realization that one major obstacle for would-be software engineers is simply having access to information about matters such as commands in the Python programming language and syntax for SQL database queries. The problem is especially acute for non-native English speakers.
Matt Fisher, senior data analytics manager for Office 365 and Microsoft 365, and one of Codie’s creators, describes the service as “Cortana’s geekier little sibling.” It emerged from the Microsoft Garage, a program that gives employees encouragement and resources as they pursue ideas they’re passionate about, whether or not they fit neatly into official responsibilities. Fifteen staffers with diverse backgrounds were on the team that created the service, including developers, designers, and marketers. It beat 767 other projects to win the company’s Redmond Science Fair, and took second place out of 5,875 entries in the company’s inclusivity challenge.
Afreen Rahman and Matt Fisher [Photo: Harry McCracken]
Using text-based input, Codie answers coding questions by drawing information from Microsoft’s Bing search engine and user-to-user tech advice site Stack Overflow. “In 48 hours we had something that was working across five different spoken languages and pulling from a huge database of information so you could ask it a coding question in Spanish and get a technical answer back in Spanish,” says Afreen Rahman, who works on the Microsoft Store in her day job as a software engineer.
Though Codie’s creators brought a variety of skills to the enterprise, none of them started out knowing that much about AI. “We used out-of-the-box tools available as part of the AI Suite that Microsoft offers,” says Rahman. “And as devs we were able to pick up the documentation in no time and just get going.”
Fisher rattles off the Microsoft cloud offerings that power Codie: “We used everything from the Azure learning service to LUIS language understanding. QnA Maker, the Bing graph, the Microsoft graph, the Azure bot framework, the Azure speech plugin.” There was plenty of Microsoft AI expertise in there; it’s just that it was in ready-to-use form. For Codie–and many other things people want to build–that’s enough.
As an effort to leverage AI as an enabling technology for an inspiring purpose, Codie is already a success. The people who built it are thinking about upgrades–one obvious one would be to let users talk rather than type–and how to make it broadly available. “Our goal is that we would like to see it be used outside of Microsoft’s walls,” says Fisher. “We’re working towards what we need to do to get there. We have support from this lovely group, the Garage, but this is our second or third job in many cases.”

REAL PROBLEMS, REAL RESEARCH
One other thing about Microsoft’s new approach to cross-pollinating research and products: It isn’t just the products that benefit. AI has an insatiable appetite for the sort of data required to train machine-learning algorithms. Microsoft, as one of the largest tech companies in the world, has that data, in anonymized form, by the metric ton. Which means that if there was a time when its research efforts benefited from being walled off from money-making businesses that serve actual human beings, it’s over.
“Nowadays, to do a lot of very exciting AI research, you need to get access to real problems and you need to get access to data,” says Shum. “This is where you work together with [product teams]. You build a new model, you train the new model, and then you tweak your new model. Now you have advanced your basic research further. And along the way, you never know–you could get a breakthrough.”
When did AI begin to have an impact on the video game business? If you ask Kevin Gammill, partner general manager for PlayFab at Microsoft, he’ll reach back four decades and bring up early computer-controlled opponents such as the flying saucers in Atari’s Asteroids arcade game. “I think AI has been around as long as gaming’s been around,” he says.
Tamil Melamed and Kevin Gammill [Photo: Harry McCracken]
In 2019, AI’s potential applications in gaming go far beyond making decisions for bad guys. Microsoft–the rare company with deep investments in both games and basic computer research–is well positioned to explore them.
That includes useful stuff that makes life better for game players without screaming “Hey, artificial intelligence!” Studies, for instance, have shown that online competition greatly benefits from players being matched with others of roughly comparable skill. “If you go into a game and you just get slaughtered, it’s probably not a good experience,” explains Gammill. “If everyone’s too easy, it’s also probably not a good experience.” Xbox Live has long used an algorithm called TrueSkill (recently updated as TrueSkill 2) to help ensure that contestants are neither bored nor massacred by their opponents.
Another piece of practical AI was inspired by the fact that Microsoft “heard for years loud and clear from gamers that they would greatly prefer to spend a lot more time playing games than downloading games,” says Ashley McKissick, who manages the Game Pass service. The company initially tried to let players skip ahead to the action before a download was complete through a system that required some heavy lifting on the part of game publishers and was therefore not universally adopted.
Starting last summer, Microsoft replaced this unsatisfactory handwork by humans with a piece of AI-enhanced technology called FastStart. It leverages machine learning to determine which bits of a game to download first, allowing gamers to begin playing up to twice as fast. “We’re not really changing the laws of physics here, but it does make your download much smarter,” says McKissick.
Increasingly, Microsoft is formalizing the kind of collaboration that helps AI make its way into games. Similar to the MSR/Office meeting called Roc, a confab called Magneto is designed to cultivate conversation–and outright hacking–between MSR and the gaming group. Along with those two constituencies, “there are people from Bing there, there are people from Windows there, there are people from Azure there,” says Tamir Melamed, Microsoft’s head of PlayFab engineering. “Because there’s a lot of those technologies that we think we can share down the road.”
One joint project emerged from Microsoft’s annual companywide hackathon. In 2017, the gaming group was wrestling with the challenge of curating Mixer, a game-streaming service–in the same Zip Code as Twitch, but more interactive–which Microsoft had acquired in the form of a startup called Beam. “We found ourselves with a much larger volume of streams than we had anticipated,” says Chad Gibson, Mixer’s general manager. “And so we were trying to find, ‘How can we provide new, unique ways of allowing players of PlayerUnknown’s Battlegrounds or Fortnite to be discovered?'”
Chad Gibson and Ashley McKissick [Photo: Harry McCracken]
At around the same time that the Mixer team was asking itself that question, the hackathon was being won by some Microsoft Research staffers who’d devised “Watch For,” an AI system for analyzing live video streams and identifying specific events therein. (Microsoft was so impressed by the technology’s commercial possibilities that it announced the team’s victory without disclosing exactly what it had created.) The two groups collaborated to use Watch For as the basis for HypeZone, a Mixer feature that lets viewers tune in to the most climactic moments in game streams in progress. “It allowed us to do new forms of discovery that we honestly didn’t think were possible,” says Gibson.
As long as gaming has its frustrations, AI should provide further ways to mitigate them. Recently, Gammill was engaged in heated competition in the Tom Clancy first-person shooter Rainbow Six Siege against three friends. Then one contestant’s internet connection choked. “Three of us are running around and a frozen character is standing there,” he says. And a frozen character can’t do much except be mowed down.
A better scenario would be if the game could use AI to determine that a player had gotten cut off, and then take temporary control of the corresponding character–and play in the same style as that person. “Now we’re very close to scenarios like that actually coming to fruition,” says Gammill.
THE SILICON FACTOR
Steve Jobs was fond of saying that Apple was the only computer company that built “the whole widget”–not just software or hardware, but both, integrated so well that the seams of the experience start to fade away. In recent years, that philosophy has reached its ultimate expression as Apple has even designed its own iPhone and iPad processors and optimized them for running Apple software.The same vertical integration that’s a boon for a smartphone or tablet makes sense, on a grander scale, for a data center–such as the ones that power Microsofts Azure services. Enter Project Brainwave. That’s the name for the custom hardware accelerator Microsoft has designed–using Intel field-programmable gate arrays (FPGAs)–specifically for the purpose of speeding up AI running in the Azure cloud.
Microsoft’s move into designing its own hardware for optimal AI is hardly unique. Both Google and Amazon are also moving down the stack from software to silicon for similar reasons. But Microsoft isn’t just hopping aboard a trendy bandwagon. Project Brainwave is the end product of an opportunity Doug Burger began thinking about almost a decade ago–and at first, he did it on his own. “I started the work in 2010 and then exposed it to management after about a year,” remembers Burger, who was a researcher within MSR at the time.
Project Brainwave sprang from Microsoft’s realization that embracing AI needed to start at the chip level [Photo: courtesy of Microsoft]
Conventional chips know how to execute the computing instructions in their repertoire when they leave the factory, and can never be retrained for a different purpose-such as efficiently running a new machine-learning algorithm. FPGAs, by contrast, are like chameleons, says Burger. “What the FPGAs allow us to do is build stuff really fast and get it into production, and then iterate on a very rapid cadence,” he explains. “So that chameleon is changing colors really fast and getting better every time it changes color.”
FPGA technology allows Microsoft to deliver highly efficient deep learning as a service in a way that addresses specific customer requests. “A lot of the problems that they want to solve are around image analysis,” says Ted Way, senior program manager for Azure Machine Learning. “‘I want to look at my manufacturing defects.’ ‘I want to look at whether [products are] out of stock.’ ‘I want to see if people are smoking at my gas station because I’m afraid of fires.’ Doug’s team was able to turn that around and build these convolutional neural networks that ran super fast on the FPGA in just six months or so.” By silicon standards, that’s quick.
When Burger had begun his personal investigation of FPGAs in 2010, it wasn’t clear–at least to people who aren’t prescient computer scientists–how quickly AI would go mainstream, let alone that delivering it as a service would become a strategic imperative for a company such as Microsoft. Soon enough, Microsoft understood the value his brainchild could bring to Azure. Last July, after Project Brainwave left the lab, so did Burger and his team. Today, they’re continuing their work as part of the Azure group rather than MSR.
Such a segue is not unusual. “One thing about the Microsoft culture now, that boundary between research and product has blurred quite a bit,” says Burger. “The product groups have lots of people who were formerly researchers and are developing new stuff. Research has not only people that do research but engineers building stuff. It’s more of a continuum.” Nadella, he adds, “has done a great job of pushing down this kind of innovation.”
SELF-SERVE SMARTS
With Azure, Microsoft is in a race with Amazon and Google to provide AI and other advanced computing functions to businesses of all sorts as on-demand services. That’s not just good for outside companies; there are also groups within Microsoft that can benefit from pre-packaged AI and machine learning.
Case in point: Codie, a multilingual chatbot designed to provide information about coding. An internal Microsoft experiment for now rather than a commercial product, it sprang from the realization that one major obstacle for would-be software engineers is simply having access to information about matters such as commands in the Python programming language and syntax for SQL database queries. The problem is especially acute for non-native English speakers.
Matt Fisher, senior data analytics manager for Office 365 and Microsoft 365, and one of Codie’s creators, describes the service as “Cortana’s geekier little sibling.” It emerged from the Microsoft Garage, a program that gives employees encouragement and resources as they pursue ideas they’re passionate about, whether or not they fit neatly into official responsibilities. Fifteen staffers with diverse backgrounds were on the team that created the service, including developers, designers, and marketers. It beat 767 other projects to win the company’s Redmond Science Fair, and took second place out of 5,875 entries in the company’s inclusivity challenge.
Afreen Rahman and Matt Fisher [Photo: Harry McCracken]
Using text-based input, Codie answers coding questions by drawing information from Microsoft’s Bing search engine and user-to-user tech advice site Stack Overflow. “In 48 hours we had something that was working across five different spoken languages and pulling from a huge database of information so you could ask it a coding question in Spanish and get a technical answer back in Spanish,” says Afreen Rahman, who works on the Microsoft Store in her day job as a software engineer.
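The loop Rahman describes (take a question in one language, search a technical knowledge source in English, and return the answer in the user's language) can be sketched in Python. This is an illustrative pipeline, not Codie's actual implementation: the `translate` stub below stands in for Microsoft's cloud translation service, while the search step uses the public Stack Exchange API.

```python
import requests

SEARCH_URL = "https://api.stackexchange.com/2.3/search/advanced"

def translate(text, source_lang, target_lang):
    """Placeholder for a translation service. Codie uses Microsoft's
    cloud translation; this stub passes text through unchanged so the
    sketch stays self-contained."""
    return text

def build_search_params(question):
    """Query parameters for the public Stack Exchange search API."""
    return {
        "site": "stackoverflow",
        "q": question,
        "accepted": "True",   # only questions with an accepted answer
        "order": "desc",
        "sort": "relevance",
    }

def answer_coding_question(question, user_lang="en"):
    # 1. Normalize the question to English for searching.
    english_q = translate(question, user_lang, "en")
    # 2. Search Stack Overflow for relevant answered questions.
    resp = requests.get(SEARCH_URL, params=build_search_params(english_q))
    items = resp.json().get("items", [])
    if not items:
        return translate("No answer found.", "en", user_lang)
    # 3. Return the best match, translated back to the user's language.
    best = items[0]
    return translate(best["title"], "en", user_lang) + " " + best["link"]
```

The point of the shape is that the retrieval layer only ever sees English, so supporting a new spoken language is a translation problem, not a search problem.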
Though Codie’s creators brought a variety of skills to the enterprise, none of them started out knowing that much about AI. “We used out-of-the-box tools available as part of the AI Suite that Microsoft offers,” says Rahman. “And as devs we were able to pick up the documentation in no time and just get going.”
Fisher rattles off the Microsoft cloud offerings that power Codie: “We used everything from the Azure learning service to LUIS language understanding. QnA Maker, the Bing graph, the Microsoft graph, the Azure bot framework, the Azure speech plugin.” There was plenty of Microsoft AI expertise in there; it’s just that it was in ready-to-use form. For Codie, and many other things people want to build, that’s enough.
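Several of the services Fisher names are exposed as plain REST endpoints, which is why a small team could wire them together so quickly. As a rough illustration, a bot can query a QnA Maker knowledge base with a single POST; the resource name, knowledge-base ID, and endpoint key below are placeholders, and this sketch is not taken from Codie's code:

```python
import requests

# Placeholder values; a real deployment supplies its own resource
# name, knowledge-base ID, and endpoint key from the Azure portal.
QNA_ENDPOINT = "https://<your-resource>.azurewebsites.net"
KB_ID = "<your-knowledge-base-id>"
ENDPOINT_KEY = "<your-endpoint-key>"

def build_qna_request(question: str, top: int = 1):
    """Build the URL, headers, and JSON body for a QnA Maker
    generateAnswer call (the v4 runtime REST shape)."""
    url = f"{QNA_ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
    headers = {"Authorization": f"EndpointKey {ENDPOINT_KEY}"}
    body = {"question": question, "top": top}
    return url, headers, body

def ask_qna(question: str) -> str:
    """POST the question and return the top-ranked answer text."""
    url, headers, body = build_qna_request(question)
    resp = requests.post(url, headers=headers, json=body)
    answers = resp.json().get("answers", [])
    return answers[0]["answer"] if answers else "No answer found."
```

From a developer's perspective that is the whole integration: no model training, just an HTTP call against a knowledge base someone else's tooling built.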
As an effort to leverage AI as an enabling technology for an inspiring purpose, Codie is already a success. The people who built it are thinking about upgrades (one obvious one would be to let users talk rather than type) and how to make it broadly available. “Our goal is that we would like to see it be used outside of Microsoft’s walls,” says Fisher. “We’re working towards what we need to do to get there. We have support from this lovely group, the Garage, but this is our second or third job in many cases.”

REAL PROBLEMS, REAL RESEARCH
One other thing about Microsoft’s new approach to cross-pollinating research and products: It isn’t just the products that benefit. AI has an insatiable appetite for the sort of data required to train machine-learning algorithms. Microsoft, as one of the largest tech companies in the world, has that data, in anonymized form, by the metric ton. Which means that if there was a time when its research efforts benefited from being walled off from money-making businesses that serve actual human beings, it’s over.
“Nowadays, to do a lot of very exciting AI research, you need to get access to real problems and you need to get access to data,” says Shum. “This is where you work together with [product teams]. You build a new model, you train the new model, and then you tweak your new model. Now you have advanced your basic research further. And along the way, you never know–you could get a breakthrough.”
Thursday, 24 January 2019
New 70-743 Dumps with PDF and 267 Questions & Answers
Monday, 2 April 2018
US Seeks End to Supreme Court Privacy Fight With Microsoft On Overseas Data
Some Justices Urged Congress To Pass A Law To Resolve The Matter
The US government's Supreme Court battle with Microsoft over whether technology companies can be forced to hand over data stored abroad may be coming to an end, after federal prosecutors asked that the case be dismissed.
On March 22, President Donald Trump signed a provision that clearly states that US judges can issue orders for such data, while offering companies a way to object if the request conflicts with foreign law.
"This case is now moot," the US Department of Justice said in a 16-page court filing seeking dismissal, citing the recently approved legislation. On February 27, the Supreme Court heard arguments in the case, which was one of the most closely watched of the court's current term.
Some justices urged Congress to pass a law to resolve the matter. Microsoft and the Department of Justice were locked in a dispute over how US prosecutors may access data stored on foreign computer servers owned by US companies. The case stemmed from Microsoft's challenge of a domestic warrant, issued by a US judge, for emails stored on a Microsoft server in Dublin in connection with a drug-trafficking investigation.
The new bipartisan law, known as the Cloud Act, was supported by Microsoft, other major technology companies, and the Trump administration. But civil liberties groups opposed it, saying it did not include enough privacy protections.
Microsoft, with over 100 data centers in 40 countries, was the first US company to challenge a domestic search warrant seeking data held outside the United States. The Microsoft customer whose emails were requested told the company that he was based in Ireland when he signed up for his account.
A Microsoft representative did not immediately respond to requests for comment on the Justice Department's filing.