Volvo XC40 Unveiled; Will Launch In India In 2018

Volvo has showcased the much-anticipated XC40 and, well, it stays remarkably close to the concept showcased earlier. The XC40 is the smallest and most affordable XC model in the Swedish carmaker’s line-up and will go on sale in international markets later this year, or by early 2018 at the latest. India, however, will get the new XC40 sooner than expected: the SUV will be launched here in the first half of 2018.


“The XC40 is our first entry in the small SUV segment, broadening the appeal of the Volvo brand and moving it in a new direction,” said Håkan Samuelsson, president and chief executive. “It represents a fresh, creative and distinctive new member of the Volvo line-up.”

(The 2018 Volvo XC40 previews more 40 series models that will come from the company)

The XC40 sits below the recently introduced new-generation Volvo XC60 in the company’s model line-up and will share a host of styling cues and features with its older sibling. It is, however, based on a completely different platform. Unlike the XC90 and XC60, which use Volvo’s Scalable Product Architecture (SPA), the all-new XC40 is built on the Compact Modular Architecture (CMA) platform, which Volvo shares with its Chinese parent company, Geely.
Though not on the same platform, the XC40 will borrow a host of features from both the XC60 and the XC90. Safety features on offer will include run-off road protection and mitigation, cross-traffic alert, automatic emergency braking, a 360-degree camera and a semi-autonomous Pilot Assist system. The five-seater cabin has been designed in line with current Volvo models, so you’ll get Volvo’s signature vertical touchscreen infotainment system on the dashboard, a three-spoke multi-function steering wheel, comfortable seats wrapped in premium leather and more.

(The 2018 XC40’s interior is inspired by Volvo’s bigger models)

From the start of production, the XC40 will be available with a D4 diesel or a T5 petrol four-cylinder Drive-E powertrain. Further powertrain options, including a hybridised as well as a pure electric version, will be added later. The XC40 will also be the first Volvo model to be available with Volvo’s new 3-cylinder engine.

(The new Volvo XC40 is based on the new modular CMA platform)

Volvo claims that the XC40 offers a clutter-free cabin with multiple smart, functional storage compartments. For instance, the door pockets have been designed to hold an average-sized laptop or tablet and a couple of bottles. Volvo has also moved the speakers out of the doors by developing a world-first, air-ventilated, dashboard-mounted subwoofer. The centre console features a smartphone holder, a trash bin, a compartment for tissues and more. The XC40 will go up against the likes of the Audi Q3 and the BMW X1.

[“Source-auto.ndtv”]

Martech enablement series: Part 7 — Insights, intelligence and integration

Welcome to Part 7 of “A Nine-Part Practical Guide to Martech Enablement.” This is a progressive guide, with each part building on the previous sections and focused on outlining a process to build a data-driven, technology-driven marketing organization within your company. Below is a list of the previous articles for your reference:

  • Part 1: What is Martech Enablement?
  • Part 2: The Race Team Analogy
  • Part 3: The Team Members
  • Part 4: Building the Team
  • Part 5: The Team Strategy
  • Part 6: Building the Car

In these previous parts, we looked at how your martech team parallels an automobile race team. We spent time investigating how a race team is constructed and how it builds a strategy for winning individual races and the overall race series. We then saw how the same approach works for constructing and strategizing a martech team, a process we identified as “martech enablement.”

As we discussed in Part 1 of this guide, martech enablement is ultimately about obtaining insights and providing the tools and processes to act on them across your marketing organization. In Part 6, we discussed “building the car,” with a focus on breaking down the systems in your martech stack that allow you to take action.

In this article, we will explore the systems that provide insights and enable team collaboration. We’ll also look at tying them all together with integration approaches, tools and strategies. Once again, a shout-out to Scott Brinker for producing the “Marketing Technology Landscape” to help make sense of all the martech products available.

Insights and intelligence

When you’re driving your car, a number of tools inform you how to take action. Looking out your windshield, windows and mirrors gives you immediate data that you respond to. Additionally, you have tools like your instrument dashboard, GPS, traffic data, your radio, and even your passengers.

Race drivers and the team as a whole have sophisticated systems in and around the car that are collecting information, as well as experts to analyze the information in real time, providing actionable insights that the team can use before, during and after the race. This is a huge part of the team’s competitive advantage that they use to win races.

Part of the martech enablement process is to leverage the data within your martech stack so that experts within your team can analyze that information to provide actionable insights, so your marketing organization can win your race.

To reiterate a point made in Part 6 of this guide, a solid data strategy is one of the most important components of martech enablement. This provides the foundation for extracting and “mashing” this data in a way that you can measure. A sound approach is to understand your organization’s KPIs (key performance indicators) and craft a data strategy that supports collecting data to enable measurement of those KPIs.
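To make that concrete, here is a minimal sketch of what “mashing” data to measure a KPI can look like in practice. The file names, columns and the KPI itself are hypothetical placeholders, not a prescription; the point is that your data strategy determines whether a join like this is even possible.

```python
# A minimal sketch of "mashing" data from two stack systems to measure a KPI.
# File names, column names and the KPI itself are hypothetical examples.
import pandas as pd

# Hypothetical exports: web analytics sessions and CRM closed deals
sessions = pd.read_csv("web_analytics_sessions.csv")  # columns: visitor_id, campaign, date
deals = pd.read_csv("crm_closed_deals.csv")           # columns: visitor_id, amount, close_date

# Join the two sources on a shared visitor identifier
merged = sessions.merge(deals, on="visitor_id", how="left")

# KPIs: revenue attributed per campaign, and session-to-deal conversion rate
by_campaign = merged.groupby("campaign").agg(
    sessions=("visitor_id", "count"),
    deals=("amount", "count"),
    revenue=("amount", "sum"),
)
by_campaign["conversion_rate"] = by_campaign["deals"] / by_campaign["sessions"]
print(by_campaign.sort_values("revenue", ascending=False))
```

Notice that none of this works unless both systems capture a common identifier; that is exactly the kind of decision a sound data strategy makes up front.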

Many systems and categories of tools assist in the area of gaining insights. Below is a list of some of the systems used to provide visibility and understanding:

  • Web analytics platforms
  • AI/predictive analytics
  • MPM — Marketing performance management
  • Marketing attribution systems
  • Business intelligence (BI) systems
  • Dashboards
  • Data visualization tools
  • Social media monitoring
  • Sales intelligence
  • Audience and market research data

As you progress through the martech enablement process, your “insights” toolset will grow in both size and maturity. I want to remind you to stay focused on letting this part of your stack evolve out of your incremental team objectives and your series and race goals. Don’t lead with the goal of creating a cool BI environment or dashboard; let these grow out of the goals driving the martech enablement process.

Strategic vs. tactical insights

I want to spend a minute discussing the difference between strategic and tactical insights and their alignment with your team, series and race objectives. For a refresher on these, see Part 5 of this guide.

When measuring and analyzing performance against your team and series goals, you’re looking at strategic insights, where understanding both the current level and the performance trend is desirable. Think in terms of tools that show you the results of your marketing efforts across time. A tactical insight will generally be more closely aligned with your race goals and will be a singular value or KPI.

Relating this to our race team analogy, a strategic goal could be wanting to improve the team’s average finish position from the current state to some future targeted goal. Over time, you could measure and graph the improvement and trend toward that goal.

A tactical goal might be the desire to come in third place or better in a particular race; your insight tool could represent that as a single KPI. That isn’t to say you’ll never analyze performance trends during a race, such as average lap speed. But some values benefit from being analyzed as a trend, while others are just fine as a current or ending value.
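As a toy illustration of the difference (with made-up numbers, purely for the analogy), the strategic insight is a trend across a season, while the tactical insight is a single value checked against a race goal:

```python
# Illustrative only: contrasting a strategic (trend) insight with a
# tactical (single-value) insight, using made-up race results.
finish_positions = [8, 7, 9, 6, 5, 6, 4, 3]  # hypothetical season results

# Strategic insight: is the team's average finish trending toward its goal?
window = 3
rolling_avg = [
    sum(finish_positions[i - window:i]) / window
    for i in range(window, len(finish_positions) + 1)
]
print("Rolling average finish:", [round(x, 1) for x in rolling_avg])

# Tactical insight: did we hit the race goal of third place or better?
print("Race goal (P3 or better) met:", finish_positions[-1] <= 3)
```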

Team management and collaboration

When it comes to management and collaboration in the race team, both pre-race and race-day systems are needed to support the team’s operations. These tools are necessary to get things done right in your marketing organization. Good management and collaboration tools help great people be a great team. Here are some of those systems:

  • Project management
  • Workflow
  • Collaboration tools
  • Business Process Management (BPM)/Agile & Lean
  • Talent management
  • Vendor management
  • Budget and finance

The nuts, bolts, welds, hoses and wires

It’s important to have a strategy and tools to hold all of this together. There are a few integration strategies to contemplate in martech, and your marketing organization will likely use several of them. They generally break down into three categories: native integration, iPaaS (integration platform as a service) and custom integration.

As technology matures, and the interoperability of products grows, companies are building “connectors” that allow for the exchange of data between their products and other widely used ones. These native integrations generally require some technical implementation or configuration, but the product manufacturers have done much of the heavy lifting to allow for the exchange of data between systems they have connectors for.

iPaaS is a “suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on-premises and cloud-based processes, services, applications and data within individual or across multiple organizations,” according to Gartner. These platforms enable a more systematic way of creating and controlling data exchanges between the products in your martech stack.

Custom development is as it sounds: software engineers develop custom applications to create and manage data exchanges between the products and systems in your martech stack. Regardless of whether you take advantage of the aforementioned native integrations or iPaaS, at some level you will likely need good technologists to do custom integration work along your path to martech enablement.
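As a sketch of what that custom work often looks like (the endpoints, fields and tokens below are hypothetical placeholders, not a real vendor API), a point-to-point integration is essentially a small extract-transform-load job:

```python
# A sketch of a custom point-to-point integration: endpoints, fields and
# auth tokens are hypothetical placeholders, not a real vendor API.
import requests

SOURCE_URL = "https://analytics.example.com/api/leads"
TARGET_URL = "https://crm.example.com/api/contacts"

def sync_leads_to_crm():
    # Extract: pull new leads from the (hypothetical) analytics system
    resp = requests.get(SOURCE_URL, headers={"Authorization": "Bearer <token>"})
    resp.raise_for_status()

    for lead in resp.json():
        # Transform: map source fields onto the target system's schema
        contact = {"email": lead["email"], "source": lead.get("campaign", "unknown")}
        # Load: push each contact into the (hypothetical) CRM
        requests.post(
            TARGET_URL, json=contact,
            headers={"Authorization": "Bearer <token>"},
        ).raise_for_status()

if __name__ == "__main__":
    sync_leads_to_crm()
```

Multiply this by every pair of systems that needs to talk, and you can see why native connectors and iPaaS platforms exist: custom code is flexible, but each point-to-point job is one more thing to maintain.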

Stack it up!

To review: combining the categories from Part 6, “Building the car,” with those in this part, your cohesive martech stack is composed of the action-oriented systems covered there plus the insights, intelligence, collaboration and integration systems described above.

Intro to Part 8: Running the series and the races

Now that we’ve gone through the people, the strategy and the stack, we can move on to the execution part of martech enablement. In Part 8 of the guide, we’ll get into how your team iteratively and incrementally moves your marketing organization toward digital transformation and maturity.

I look forward to continuing to share with you about martech enablement in Part 8 of this guide.




[“Source-martechtoday”]

IBM And NVIDIA Power New Scale-Out Gear For AI

Accelerating deep learning (DL) training – on GPUs, TPUs, FPGAs or other accelerators – is in the early days of scale-out architecture, much as the server market was in the mid-2000s. DL training enables the advanced pattern recognition behind modern artificial intelligence (AI) based services. NVIDIA GPUs have been a major driver of DL development and commercialization, but IBM just made an important contribution to scale-out DL acceleration. Understanding what IBM did, and how that work advances AI deployments, takes some explanation.

Scale Matters

Key Definitions (Image: TIRIAS Research)

Inference scales out. Trained DL models can be simplified for faster processing, with good-enough pattern recognition to create profitable services. Inference can scale out as small individual tasks running on many inexpensive servers. There is a lot of industry investment aimed at lowering the cost of delivering inference; we’ll discuss that in the future.

The immediate challenge for creating deployable inference models is that, today, training scales up. Training requires large data sets and high numeric precision; aggressive system designs are needed to meet real-world training times and accuracies. But cloud economics are driven by scale-out.

The challenge for cloud companies deploying DL-based AI services, such as Microsoft’s Cortana, Amazon’s Alexa and Google Home, is that DL training has not scaled well. Poor off-the-shelf scaling is mostly due to the immature state of DL acceleration, forcing service providers to invest (in aggregate) hundreds of millions of dollars in research and development (R&D), engineering and deployment of proprietary scale-out systems.

NVLink Scales Up in Increments of Eight GPUs

GPU evolution has been a key part of DL success over recent years. General purpose processors were, and still are, too slow at processing DL math with large training data sets. NVIDIA invested early in leveraging GPUs for DL acceleration, in both new GPU architectures to further accelerate DL and in DL software development tools to enable easy access to GPU acceleration.

An important part of NVIDIA’s GPU acceleration strategy is NVLink, a high-speed, scale-up interconnect architecture that directly connects two to eight GPU sockets. NVLink enables GPUs to train together with minimal processor intervention. Prior to NVLink, GPUs lacked the low-latency interconnect, data-flow control sophistication and unified memory space needed to scale up by themselves. NVIDIA implements NVLink using its SXM2 socket instead of PCIe.
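On a multi-GPU system you can see whether (and how) GPUs are connected over NVLink using NVIDIA’s stock tooling. The sketch below simply shells out to nvidia-smi, whose topology matrix marks NVLink connections with entries such as NV1 or NV2:

```python
# Inspect the GPU interconnect topology of the local machine.
# `nvidia-smi topo -m` prints a matrix in which entries such as "NV1"/"NV2"
# indicate NVLink hops between GPU pairs. Requires NVIDIA driver tools.
import subprocess

def print_gpu_topology():
    out = subprocess.run(
        ["nvidia-smi", "topo", "-m"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)

if __name__ == "__main__":
    print_gpu_topology()
```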

NVIDIA’s DGX-1, Microsoft’s Open Compute Project (OCP) Project Olympus HGX-1 GPU chassis and Facebook’s “Big Basin” server contribution to OCP are very similar designs that each house eight NVIDIA Tesla SXM2 GPUs. The DGX-1 design includes a dual-processor x86 server node in the chassis, while the HGX-1 and Big Basin designs must be paired with separate server chassis.

Microsoft’s HGX-1 can bridge four GPU chassis by using its PCIe switch chips to connect the four NVLink domains to one to four server nodes. While all three designs are significant feats of server architecture, the HGX-1’s 32-GPU design limit presents a practical upper limit for directly connected scale-up GPU systems.

Microsoft HGX-1 motherboard with eight SXM2 sockets, four populated (Image: TIRIAS Research)

The list price for each DGX-1 is $129,000 using NVIDIA’s P100 SXM2 GPU and $149,000 using its V100 SXM2 GPU (including the built-in dual-processor x86 server node). While this price range is within reach of some high-performance computing (HPC) cluster bids, it is not a typical cloud or academic purchase.

Original Design Manufacturers (ODMs) like Quanta Cloud Technology (QCT) manufacture variants of OCP’s HGX-1 and Big Basin chassis, but do not publish pricing. NVIDIA P100 modules are priced from about $5,400 to $9,400 each. Because NVIDIA’s SXM2 GPUs account for most of the cost of both Big Basin and HGX-1, we believe that system pricing for both is in the range of $50,000 to $70,000 per chassis unit (not including matching x86 servers), in cloud-sized purchase quantities.

Facebook’s Big Basin Performance Claims

Facebook published a paper in June describing how it connected 32 Big Basin systems over its internal network to aggregate 256 GPUs and train a ResNet-50 image recognition model in under an hour with about 90% scaling efficiency and 72% accuracy.
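“Scaling efficiency” here can be read as measured aggregate throughput divided by perfectly linear throughput, i.e., N times the single-GPU rate. A minimal sketch of that arithmetic, with a placeholder single-GPU rate rather than a figure from the paper:

```python
# One common way to compute scaling efficiency: measured aggregate
# throughput divided by perfectly linear throughput (N x single-GPU rate).
# The rates below are hypothetical placeholders, not figures from the paper.
def scaling_efficiency(single_gpu_rate, n_gpus, measured_rate):
    ideal_rate = single_gpu_rate * n_gpus  # perfectly linear scaling
    return measured_rate / ideal_rate

single = 100.0      # hypothetical images/sec on one GPU
n = 256             # GPUs in the cluster
measured = 23000.0  # hypothetical aggregate images/sec actually observed

print(f"Scaling efficiency: {scaling_efficiency(single, n, measured):.0%}")  # ~90%
```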

While 90% scaling efficiency is an impressive, state-of-the-art achievement, there are several challenges with Facebook’s paper.

The eight-GPU Big Basin chassis is the largest possible scale-up NVIDIA NVLink instance, and it is expensive, even if you could buy OCP gear as an enterprise buyer. Plus, Facebook’s paper does not mention which OCP server chassis design and processor model were used for the benchmarks. Which processor may be a moot point, because if you are not a cloud giant, it is very difficult to buy a Big Basin chassis or any of the other OCP servers that Facebook uses internally. Using different hardware, your mileage is guaranteed to vary.

Facebook also does not divulge the operating system or development tools used in the paper, because Facebook has its own internal cloud instances and development environments. No one else has access to them.

The net effect is that it is nearly impossible to replicate Facebook’s achievement if you are not Facebook.

Facebook Big Basin server (Image: TIRIAS Research)

IBM Scales Out with Four GPUs in a System

IBM recently published a paper as a follow-up to Facebook’s. It describes training a ResNet-50 model in under an hour at 95% scaling efficiency and 75% accuracy, using the same data sets Facebook used for training. IBM’s paper is notable in several ways:

  1. Not only did IBM beat Facebook on all the metrics; 95% efficiency is very nearly linear scaling.
  2. Anyone can buy the equipment and software to replicate IBM’s work. Equipment, operating systems and development environments are called out in the paper.
  3. IBM used smaller scale-out units than Facebook. Assuming Facebook used its standard dual-socket compute chassis, IBM has half the ratio of GPUs to CPUs: Facebook uses a 4:1 ratio and IBM a 2:1 ratio.

IBM sells its OpenPOWER “Minsky” deep learning reference design as the Power Systems S822LC for HPC. IBM’s PowerAI software platform with Distributed Deep Learning (DDL) libraries includes IBM-Caffe and “topology aware communication” libraries. PowerAI DDL is specific to OpenPOWER-based systems, so it will run on similar POWER8 Minsky-based designs and upcoming POWER9 “Zaius”-based systems (Zaius was designed by Google and Rackspace), such as those shown at various events by Wistron, E4, Inventec and Zoom.

PowerAI DDL enables creating large scale-out systems out of smaller, more affordable, GPU-based scale-up servers. It optimizes communications between GPU-based servers based on network topology, the capabilities of each network link, and the latencies for each phase of a DL model.
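IBM has not published DDL’s internals in full detail, but the communication pattern such libraries optimize can be illustrated with a standard technique: ring all-reduce, which sums gradients across N workers in 2×(N−1) steps while keeping every link busy. A toy, in-memory simulation (pure Python, hypothetical data, no real networking):

```python
# Toy simulation of ring all-reduce: each worker's gradient is split into n
# chunks; after 2*(n-1) exchange steps, every worker holds the full sum.
def ring_allreduce(grads):
    n = len(grads)
    chunks = [list(g) for g in grads]  # chunks[worker][chunk_index]

    # Phase 1, reduce-scatter: at step s, worker i sends chunk (i - s) % n to
    # worker (i + 1) % n, which adds it in. Sends are buffered first to model
    # all workers exchanging simultaneously.
    for s in range(n - 1):
        sends = [((i + 1) % n, (i - s) % n, chunks[i][(i - s) % n]) for i in range(n)]
        for dst, c, val in sends:
            chunks[dst][c] += val
    # Now worker i holds the fully reduced chunk (i + 1) % n.

    # Phase 2, all-gather: circulate the reduced chunks around the same ring,
    # overwriting instead of adding, until everyone has every chunk.
    for s in range(n - 1):
        sends = [((i + 1) % n, (i + 1 - s) % n, chunks[i][(i + 1 - s) % n]) for i in range(n)]
        for dst, c, val in sends:
            chunks[dst][c] = val
    return chunks

# Demo with 4 hypothetical workers, each holding a 4-chunk "gradient"
workers = [[float(w * 10 + c) for c in range(4)] for w in range(4)]
print(ring_allreduce(workers))  # every worker ends with [60.0, 64.0, 68.0, 72.0]
```

A topology-aware library goes further, choosing which physical links form the ring and how chunks are sized, since NVLink, PCIe and network hops offer very different bandwidths and latencies.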

IBM used 64 Power System S822LC systems, each with four NVIDIA Tesla P100 SXM2-connected GPUs and two POWER8 processors, for a total of 256 GPUs – matching Facebook’s paper. Even with twice as many IBM GPU-accelerated chassis required to host the same number of GPUs as in Facebook’s system, IBM achieved a higher scaling efficiency than Facebook. That is no small feat.

IBM Power System S822LC with two POWER8 processors (silver heat sinks) and four NVIDIA Tesla P100 SXM2 modules (Image: TIRIAS Research)

Commercial availability of IBM’s S822LC to low-volume buyers will be a key element enabling academic and enterprise researchers to buy a few systems and test IBM’s hardware and software scaling efficiencies. The base price of an S822LC for Big Data (without GPUs) is $6,400, so the total price of an S822LC for High Performance Computing should be in the $30,000 to $50,000 ballpark (including the dual-processor POWER8 server node), depending on which P100 model is installed and other options.

Half the battle is knowing that something can be done. We believe IBM’s paper and product availability will spur a lot of DL development work by other hardware and software vendors.

— The author and members of the TIRIAS Research staff do not hold equity positions in any of the companies mentioned. TIRIAS Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud.

[“Source-forbes”]

Why blended learning is the future of Indian education

The debate around the quality of higher education in India has been gaining momentum since the Union Budget 2017, which laid emphasis on skill development, employability and digitisation of the education process. The government announced a slew of measures, including ‘Swayam’, an online learning portal; revamp of the National Education Policy (NEP); the Higher Education Empowerment Regulation Agency (HEERA) as a single higher education regulator; and the University Grants Commission (UGC) mandate to educational institutions to develop massive open online courses (MOOCs).

While India is making headway in digitising the learning process, universities the world over are disrupting and innovating in teaching and learning. The country has a long tradition of face-to-face learning; the teacher or guru cannot be replaced overnight with an unseen, technological entity. However, the gap between what students are taught in classrooms and what the industry demands of its prospective employees is growing every day. The rate of change in technology has outpaced, and will continue to outpace, changes in university curricula, the fastest of which are revised once a year. It is not uncommon to see students spend more than 20 years in the education system and still be saddled with unattractive job prospects.

The solution lies in ‘blended learning’, a concept that is fast gaining ground in the Indian context. In simple terms, it is a hybrid form of teaching and learning that combines classroom and online instruction. The approach mixes concept building and enquiry-based learning, retaining human interaction while allowing students to combine traditional classroom methods with online digital media. Blended learning strives to strike a balance between prescriptive learning and learning at one’s own pace. It is important to note that blended learning is not equivalent to technology-rich teaching; at its core is giving the student greater autonomy over his or her educational growth path, with technology only as an enabler.

Simply put, it is a win-win for students and teachers. The emphasis is on developing the learner’s capacity and capability, with the goal of preparing him or her for the complexities of today’s changing workplace. Since every individual assimilates information differently, online learning aims to offer a greater and better-tailored choice of learning paths aligned with specific interests. Teachers will not be burdened with the mundane task of imparting education through information overload; instead, they can focus on higher value-added instruction that synchronises technology with face-to-face learning. Automated and personalised systems will allow teachers to become mentors, free from the pressures of formal education.

For students, a major advantage is the ability to dip into a knowledge pool that doesn’t end with classroom instruction. Blended learning incorporates online courses developed by experts from different fields, helping students access globally developed, industry-relevant course material. It creates the possibility of practical, experiential learning, where students can learn at their own pace, in terms of both speed and complexity of information. It is only fair that the education process be flipped to become more learner-driven than prescriptive.

Data analytics from online learning platforms can help educators develop a targeted approach to teaching a particular individual, harnessing data over time to help students learn better. This gives teachers more accurate and specific insights into a student’s pain points: where he or she is doing well, which areas they find most challenging, and so on. It can help teachers, and by extension colleges and universities, understand student behaviour better and provide vastly more effective learning interventions.

The natural affinity Millennials have for technology, their sense of entitlement to drive their own education, and the fast-paced, fast-changing work environments they are likely to join all point in one direction: online or computer-based education could well replace brick-and-mortar education in the coming years.

Technology also enables students to access a global network of education and knowledge exchange. For instance, Anant Agarwal, CEO of edX, the online-learning platform founded by Harvard and MIT, graduated from IIT Madras before pursuing a highly successful global career. Blended learning offers a window to a global world for students who might otherwise struggle to access traditional professional education programmes, and it supplements the wider work of universities, colleges and learning providers.

In short, blended learning aims to solve problems that plague policymakers, administrators and students. While many educators have adopted this unique form of learning, one hopes that in a decade’s time, blended learning becomes the norm rather than the exception. To its credit, the Government of India is formalising the online education space, ensuring regulatory recognition for online courses and encouraging universities to develop their own online curricula.

The blended classroom of the future can leverage the power of online courses and free up classroom time for interactive collaboration, discussion, testing and problem-solving, redefining how education is administered while at the same time retaining the ethos of India’s traditional classroom system.

[“Source-moneycontrol”]