Ransomware like Bad Rabbit is big business
October is Cybersecurity Awareness Month, which is being observed in the United States, Europe, and elsewhere around the world. Ironically, it began with updates about a large-scale hack, and is ending with a large-scale ransomware outbreak.
Internet firm Yahoo kicked things off on Oct. 3 when it admitted that hackers in 2013 had accessed information about all three billion of its user accounts, not “just” the one billion first reported.
Ransomware “Bad Rabbit” is providing the finale with attacks that began Oct. 24. So far, the outbreak is mostly affecting business computers in Russia.
Both stories are fitting, in a way. The FBI considers computer break-ins and data ransoming the top two cyber threats we face. But while the former is old-fashioned e-crime, ransomware is much trendier. Much like online retailing, online advertising, and online currencies, ransomware is soaring.
Your money or your data
Traditional criminal hackers obtain their ill-gotten gains by stealing valuable data such as credit card numbers or passwords. They then look for customers, such as other criminals, to buy that data.
In contrast, ransomware hackers instead sell data back to the owners. If ransomware infects your computer, it encrypts your files to render them inaccessible until you pay a ransom. This simplifies cybercrime by replacing theft with extortion.
For example, in summer 2016, ransomware locked down the University of Calgary email system. The university paid $20,000 to unlock it.
Today, that looks cheap. In July, a Canadian company reportedly paid $425,000 to regain its data. The month before, South Korean firm Nayana paid $1 million, the highest ransom publicly admitted so far.
Growing scale and sophistication
Much like legitimate firms, some ransomware operators charge lower “prices” but target larger volumes. Bad Rabbit demands only a few hundred dollars to decrypt each computer. But it is affecting machines across Russia.
An IBM survey found that almost half of businesses suffered ransomware attacks in 2016. Some 70 per cent of those paid a ransom to regain their data.
The survey also indicates small businesses are particularly vulnerable. They often lack the computer expertise to defend themselves. Only 30 per cent provided cybersecurity training to employees, compared to 58 per cent within larger companies.
Ransomware’s sophistication is growing too. Ransomware “worms” like ZCryptor spread themselves across networks, rather than riding on infected emails.
Some ransomware specialists are selling their services to organized crime. This crime-as-a-service business model allows criminals to outsource their technology needs. User-friendly ransomware “kits” can be purchased for $175.
What might come next? Imagine state-sponsored hackers using ransomware. Host countries might give — or even sell — permission for local hackers to attack rival countries’ computers.
These cyber-privateers could plunder commerce abroad, without the host country’s direct involvement or accountability. Think of regional rivals like North and South Korea, or major powers like the U.S., Russia and China.
Sound far-fetched? Russian security services have already been accused of working with organized crime on cyberattacks. The Russian government denies any involvement. But its president, Vladimir Putin, did suggest independent “patriotic hackers” may have tampered with the U.S. election process.
How about virtual protection rackets? Instead of one-time payments for decryption, users might be “convinced” to pay ongoing fees for the “service” of avoiding encryption.
Or instead of hiding virtual data, ransomware could shut down physical objects. The Internet of Things is exposing new targets. Control systems for factories, utilities and our homes are increasingly online.
What if ransomware turned them off? Businesses begrudgingly pay thousands to recover emails. Imagine what they’d pay to restart assembly lines.
Precautions to take
To defend themselves, computer users need to do the basics. Run antivirus programs to detect threats. Think before clicking on unexpected email attachments. Keep application software and operating systems updated. (Surely you’re not still running Windows XP?)
Users should also back up files regularly. If ransomware strikes, backups allow ransom-free recovery. But keep them on removable drives to prevent their infection.
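As a rough illustration of what ransom-free recovery relies on, here is a minimal sketch of a dated-snapshot backup routine in Python. The folder paths are hypothetical, and a real routine would also verify the copy and unmount the drive afterwards:

```python
import datetime
import pathlib
import shutil

def backup(src, dest_root):
    """Copy a folder to a dated snapshot, e.g. on a removable drive."""
    stamp = datetime.date.today().isoformat()
    dest = pathlib.Path(dest_root) / f"backup-{stamp}"
    shutil.copytree(src, dest)  # raises an error if today's snapshot already exists
    return dest
```

Keeping each day's snapshot separate means an infection discovered later can be rolled back to a copy made before it started.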
Infected users can also try decrypting files with tools from sites like NoMoreRansom.org. But these might work only on simple cases.
Corporate and government action
Software makers should do more to facilitate safe computing practices. For example, it’s great that Windows now has self-updating antivirus protection. Unfortunately, it’s still awkward to back up data onto removable drives.
Business insurers could also play a role. They might require corporate computers to be updated and backed up to qualify for coverage.
Co-operation among independent agencies is needed to fight ransomware’s breadth. Canada’s Communications Security Establishment set a good example two weeks ago when it made its Assemblyline malware analysis software publicly available to tech professionals.
In contrast, the U.S. National Security Agency sets a bad example: It had known about a weakness in Windows for years, but didn’t tell Microsoft until early 2017.
Law enforcement likewise needs to co-operate across jurisdictions. September’s Interpol-Europol Cybercrime Conference was a good step in this direction.
As foreign hackers increasingly “tax” domestic businesses, ransomware becomes a national security issue. Governments may need to negotiate agreements like those covering seaborne piracy.
Finally, firms might consider keeping key systems disconnected from the internet, as some military computers have always been. Just because anything can be online doesn’t mean everything should be.
The Charge of the Light Brigade happened 163 years ago, but historians still debate who was to blame for the military fiasco. William Simpson
Could the Charge of the Light Brigade have worked?
Middle East tensions. Russian soldiers in Crimea. Western nations’ warships in the Black Sea. Those descriptions sound like Russia’s 2014 takeover of Crimea.
But they also applied 160 years earlier during the Crimean War between Russia and a British-French-Turkish alliance. That war is largely forgotten now, apart from its famous nurse Florence Nightingale.
However, another of its features also remains in our memories: The Charge of the Light Brigade. That was a small engagement that ended the inconclusive Battle of Balaclava on Oct. 25, 1854. But it became infamous for its brave soldiers, incompetent leaders and senseless bloodshed. It quickly inspired a magnificent poem by Lord Tennyson and later a colourful movie.
The ‘Valley of Death’
During the charge, Lord Cardigan’s light cavalry brigade attacked Russian cannons in “the valley of death.” The brigade defeated the gunners, but was counter-attacked by roughly 2,160 Russian light cavalry. It lost 469 of its 664 cavalrymen. Outnumbered 11-to-1, the 195 survivors retreated.
The British leaders immediately blamed each other for the fiasco.
The British army commander, Lord Raglan, had issued notoriously vague orders to his cavalry commander, Lord Lucan: “Lord Raglan wishes the cavalry to advance rapidly to the front, and to try to prevent the enemy carrying away the guns.”
But which cavalry: the Light Brigade alone or the Heavy Brigade too? Which guns: those in the valley or those on the adjacent Causeway Heights?
The Light Brigade rode smaller, faster horses. In battle it typically charged enemy troops who were disorganised or retreating. The Heavy Brigade had larger, stronger horses. It could overpower lighter cavalry or charge against infantry lines. Either unit could charge cannons, but normally from their defenceless flanks, not head-on into their gunfire.
Raglan complained that Lucan had ineptly misinterpreted his orders. The charge was supposed to target Russian cannons on the heights, not in the valley. Lucan in turn complained that Raglan’s orders had been unclear and unwise.
For his part, Cardigan complained the Heavy Brigade should have charged too, to support his men. That brigade actually had started to advance. But Lucan halted it once he saw the cannon fire’s intensity.
The leaders’ bickering ignited two ongoing historical debates. Which leader(s) deserved the blame for the disastrous charge? And could it have succeeded if it had followed one of the other alternatives?
Using math to analyse the battle
This study is an example of “digital humanities” research. It uses math and computers to investigate a humanities topic. Other examples include studies of the 1863 Battle of Gettysburg and the 1942 Battle of the Coral Sea. In those projects I likewise collaborated with historians to get results that neither of us could have obtained on our own.
For our project on the Battle of Balaclava, we built a mathematical model of the fighting and initially calibrated it with historical troop strengths and losses. This ensured it reproduced the actual charge by the Light Brigade along the valley.
We then adjusted the model to represent three alternative charges: the Light Brigade against the heights; both brigades against the heights; and both brigades along the valley. For each alternative, the model estimated the British losses and survivors.
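The text does not specify the model’s form, but combat models of this kind are commonly built on Lanchester’s aimed-fire laws, in which each side’s casualty rate is proportional to the other side’s surviving strength. Here is a minimal sketch; the effectiveness coefficients and break-off rule are purely hypothetical, chosen only for illustration:

```python
def lanchester(red, blue, k_red, k_blue, dt=0.01, stop_frac=0.3):
    """Lanchester aimed-fire model: each side's loss rate is
    proportional to the opposing side's surviving strength.
    Runs until one side falls below a break-off fraction."""
    red0, blue0 = red, blue
    while red > red0 * stop_frac and blue > blue0 * stop_frac:
        red, blue = red - k_blue * blue * dt, blue - k_red * red * dt
    return red, blue

# Hypothetical run: 664 attackers vs. 2,160 defenders, with the
# attackers assumed (arbitrarily) four times as effective per man.
survivors = lanchester(664, 2160, k_red=0.04, k_blue=0.01)
```

Calibration then means adjusting the coefficients until the simulated losses match the historical ones, before re-running the model with the alternative troop assignments.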
Bad odds under all scenarios
For example, suppose the Light Brigade had charged the cannons on the heights. Our model estimated British losses would have been 19 per cent higher than the historical ones. The 106 survivors would have been outnumbered 41-to-1 by the 4,400 Russian infantry and cavalry there.
Next, suppose instead that both brigades had charged the heights, as Raglan had intended. British losses would have been 51 per cent higher. The 661 survivors would have been outnumbered 7-to-1.
Finally, suppose both brigades had charged along the valley. British losses would have been 22 per cent higher. The 794 survivors would have been outnumbered 3-to-1.
These results have several implications. First, any of the charges would have overrun the targeted guns. The challenge was to also defeat the Russian troops behind them.
Second, all the alternative charges would have increased Britain’s already-high losses. The historical charge Lucan executed was the “least bad” by that measure.
Third, Raglan’s intended charge by both brigades against the heights would have been the worst. That scenario would have produced the highest losses while leaving too few survivors to beat the Russian soldiers. It’s fortunate that Lucan misunderstood his orders.
Most intriguingly, the charge that Lucan started but then half-cancelled is the only one that might have worked. Sending both brigades along the valley would have put the most survivors into melee and at the best odds.
Fighting while outnumbered 3-to-1 would have been tough. But earlier that day, the Heavy Brigade had defeated the lighter Russian cavalry despite being outnumbered 2-to-1 and attacking uphill. Aided by their momentum, a charge by both brigades might have won again.
These results matter because a successful charge could have turned the battle into a Russian defeat. That might have discouraged Russia’s later attack at Inkerman and thereby hastened the allied siege of nearby Sevastopol.
Conversely, an even worse charge might have led to a decisive Russian victory. They could have captured Balaclava’s port and forced the allies to abandon the Sevastopol siege. This could have allowed Russia to win the war.
As it was, the battle was only a minor victory for Russia. It made the allies’ siege more difficult, but didn’t stop it. They captured Sevastopol 11 months later, after heavy casualties on all sides.
That capture eventually forced Russia to surrender by signing the Treaty of Paris in 1856. Alas, the treaty settled very little. It instead led to new rivalries and more European wars in subsequent decades.
Their’s not to make reply,
Their’s not to reason why,
Their’s but to do and die:
Into the valley of Death
Rode the six hundred.
from the poem The Charge of the Light Brigade by Alfred Lord Tennyson
How well do students perform when retaking courses?
Ah, September is almost here. A new school year beckons, with new courses, new books and new students.
Except … some of the students are not new. They are retaking courses they had previously failed, or barely passed. They may be doing that to boost their marks, qualify for advanced courses, maintain scholarships or just stay in their degree programs.
Repeating, unfortunately, is not unusual, especially for first-year university courses. First year is especially challenging because students must adjust to the differences between high school and university.
Although repeating is widespread, little is known about it. On their second attempt, do students score higher, lower or about the same as before? How do they compare to first-time course-takers? Does it matter whether they originally had failed or just scraped by? Which students have the best chance of success when repeating?
How much do they improve?
We initially examined 232 repeat attempts in first-year courses with high repeat rates, such as calculus, economics and accounting. These repeats involved 116 students, each retaking between one and five courses. In 58 per cent of these cases, the student had originally failed. In the other cases, the student had barely passed.
In this sample, the average grade on students’ original attempts was 44 out of 100, versus an average on their repeat attempts of 60. (At our university, 50 or more is a pass.) This means repeaters improved their grades by 16 marks on average.
However, the degree of improvement varied widely. It ranged from a decrease of 45 marks to an increase of 65 marks. About nine per cent of the repeat grades were lower than the originals.
Who benefits most?
We found that students with the highest original grades tended to get the highest grades when repeating. And students with the lowest original grades tended to get the largest increases from repeating.
The student’s original course grade and their overall university average were both good predictors of their repeat grade. For example, suppose a student had barely passed on the original attempt and had good marks in their other courses. That student would likely do well when repeating.
Conversely, consider a student who had failed badly in their first attempt, and had barely passed their other courses. That student would likely score poorly when repeating.
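To illustrate how those two predictors might combine, here is a toy linear rule in Python. The intercept and weights are placeholders invented for this sketch, not estimates from the study, and the example grades are made up:

```python
def predicted_repeat_grade(original, overall_avg,
                           intercept=5.0, w_orig=0.5, w_avg=0.5):
    """Toy linear predictor: repeat grade from the original course
    grade and the student's overall average (weights are made up)."""
    return intercept + w_orig * original + w_avg * overall_avg

# Barely passed originally (52) with a strong overall average (75):
print(predicted_repeat_grade(52, 75))   # 68.5 -- a comfortable pass
# Failed badly (25) with a weak overall average (52):
print(predicted_repeat_grade(25, 52))   # 43.5 -- likely to fail again
```

A real analysis would estimate the weights by regression on actual student records, but even this sketch shows how a decent original grade and a solid overall average pull the prediction above the pass mark.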
Repeaters versus first-timers
Our follow-up research examined 931 student grades in first-year economics courses and 665 in second-year finance courses. This time we sorted repeating students into two groups: those who had originally failed and those who had originally passed. We also included grades of students taking the course for the first time. This let us compare repeaters to first-timers.
We found that repeaters who had originally passed earned the highest grades on average. They seemed to benefit from their previous course experience. First-timers had the next highest grades on average.
Students who had originally failed the course earned the lowest grades on average. It seems they mostly continued to suffer from the problems they experienced during their original attempt.
Interestingly, their grades were also more varied than those of the other two groups. While most of those who had originally failed did poorly, a few did very well. They somehow overcame their earlier difficulties in the course.
Advice for students and advisers
These results imply two suggestions for students thinking of retaking a course. First, repeating is more likely to succeed if their original grade is not too low and their other course grades are good.
Conversely, students with very low original grades and weak marks in other courses may not find repeating worthwhile. They may be better off taking a different course, or even a different degree program.
Second, the wide variation in outcomes suggests that students should thoughtfully prepare for repeat attempts. Simply continuing their original behaviour and hoping for “better luck” the second time is unwise.
Instead, students should make changes to improve their odds. They could free up more studying time by spending fewer hours at part-time jobs or on sports teams. They might also benefit from attending learning skills workshops or joining study groups.
This research is part of a larger program to help individual students make better decisions about their studies. We want to help them learn their material better, earn higher marks and ultimately graduate.
By helping individual students this way, our results should also benefit universities and the governments that fund them. Increased student success at repeating courses is another small way to reduce drop-out rates and boost graduation rates. Those are increasingly important goals on many campuses and in many countries.
Missile countermeasures: North Korea’s threat, Israel’s experience
North Korea’s nuclear weapons and ballistic missiles have been making headlines again. There’s also been serious controversy over how the United States and other countries should react to that threat.
To that end, it might help to examine Israel’s experience in dealing with actual rocket attacks. Some of my research has explored the properties of its interceptor systems.
Israel is a leader in missile countermeasures because of its neighbours. It has experienced rocket fire for more than a decade from Hamas militants in Gaza and Hezbollah militants in Lebanon. The country hasn’t forgotten Iraq’s 1991 Scud missile strikes. It also worries about future attacks from Syria and Iran.
Israel has consequently developed a set of countermeasures that provide it with a layered defence: blockades, deterrence, counterforce strikes, interception and civil defence.
One way to guard against missiles is to prevent hostile countries from getting them. Trade sanctions and military blockades can assist this.
Israel restricts trade into Gaza for this reason. However, Hamas responds by smuggling rockets inside other shipments. It also bypasses the blockade by producing Qassam rockets locally.
We can try to discourage the use of missiles by threatening to retaliate with our own weapons. But success depends on the opposing leader’s goals. Some might risk or even welcome retaliation.
For example, Israel has deterred Hezbollah from firing rockets for 11 years. But Hamas has not been deterred, as it benefits politically from occasionally provoking Israel.
Deterrence is the default solution regarding North Korea, as it was during the Cold War. It may be what U.S. President Donald Trump meant by his “fire and fury” comments. If North Korea ever uses nuclear weapons, it will invite massive U.S. retaliation and likely end the regime’s rule.
The direct military solution to missiles is to destroy them on the ground. This counterforce approach assumes the missiles can be located and effectively attacked. It also risks collateral damage against civilians and diplomatic repercussions with other countries.
Israel conducts clandestine airstrikes against selected Hezbollah missile shipments for this reason. It destroys rockets on a larger scale during its operations against Hamas. One challenge is the large number of rockets. Israel has destroyed thousands, but thousands more remain. Another problem is rockets being stored in civilian areas like schools. Collateral damage there is unavoidable.
Donald Trump’s “military options” presumably include pre-emptive strikes. There are “only” about 60 North Korean nuclear warheads. But the U.S. would not want to miss even one. They may be hard to locate in that highly secretive country. They also may be well-sheltered and difficult to destroy without causing heavy civilian losses.
Once missiles have launched, defenders may try to shoot them down. Such interceptions are difficult to achieve, but make spectacular videos.
Israel fields interceptors such as its Iron Dome system for short-range rockets. The U.S. has several interceptor systems of its own: Ground-Based Mid-Course Defense, Terminal High Altitude Area Defense, Aegis (Standard) Ballistic Missile Defense and Patriot. Patriot has limited combat experience, while the others have none. Their effectiveness against North Korean missiles is uncertain.
Even good interceptors aren’t perfect. Iron Dome cannot engage every incoming rocket, and sometimes it misses those it engages. Fortunately, the rockets have small warheads. The U.S. would likely face only a handful of North Korean nuclear warheads. But missing even one could be devastating.
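The arithmetic behind that worry is simple. If each engagement is assumed to succeed independently with the same probability (a simplification; real engagements are correlated), the chance that at least one warhead leaks through grows quickly with the size of the salvo:

```python
def leak_probability(n_missiles, p_intercept):
    """Chance that at least one of n incoming missiles gets through,
    assuming independent engagements with intercept probability p."""
    return 1 - p_intercept ** n_missiles

# A 90-per-cent-effective defence facing a salvo of six warheads
# still leaks at least one about 47 per cent of the time.
print(round(leak_probability(6, 0.90), 2))  # 0.47
```

The numbers here are illustrative, but the shape of the curve is the point: even a very good interceptor leaves meaningful leakage against more than a handful of warheads.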
How attackers try to avoid detection
There are also ways for attackers to avoid interception. The simplest is to overload the interceptors by firing many missiles at once. Hamas has been unable to do this against Iron Dome, but Hezbollah has enough launchers to try.
North Korea may have too few nuclear missiles to overload U.S. systems. But it could succeed with its numerous conventional missiles.
Attackers might also try fooling interceptors into chasing non-threatening missiles or ignoring threatening ones. The former wastes interceptors, while the latter lets missiles through. Possible methods include jamming, decoys or manoeuvrable warheads. This approach is probably not worthwhile for rockets fired at Israel. North Korea, however, could develop decoys to accompany its warheads.
An attacker could instead try shooting missiles at an interceptor system to destroy it. However, such counter-battery fire uses up valuable ammunition. Hamas artillery rockets are too inaccurate to make this trade-off profitable. Hezbollah could attempt it with guided missiles or armed drones. North Korea would not waste nuclear warheads against interceptor systems but might shoot conventional ones at them.
Finally, a country can reduce its missile casualties by preparing civil defences. These include warning systems, bomb shelters and emergency response units.
Israel’s warning system is called Red Color. Speakers, sirens and cell phone apps alert civilians to incoming rockets. The country has built concrete shelters in locations like playgrounds and bus stops, as well as in private homes. However, there is concern that attackers might someday use poison gas warheads to bypass these shelters.
The U.S. could revive its civil defence program, starting in Guam and Hawaii. The priority should be the warning systems. Shelter needs depend on the warheads. Even ordinary basements or concrete buildings would help somewhat against conventional explosive warheads. More sophisticated shelters and decontamination gear would be needed against nerve gas or nuclear warheads.
Israel’s countermeasures act together to complement each other’s strengths and weaknesses. Collectively they may have prevented thousands of casualties and millions of dollars of damage.
To achieve that, the country has spent billions, including over $3 billion of U.S. aid. That’s a lot to protect a small territory containing fewer than nine million people. The cost to defend the U.S. or its allies would be far larger.
Online shopping: Retailers seek visibility in face of Google control
Customers often find retailers online using Google. For example, type “laptop” into the wildly popular search engine, and you will quickly see web links related to laptop computers. Some of those are “sponsored links,” also known as retailer ads. Those retailers paid Google to display their links in searches for that keyword.
This sponsored search advertising is popular with retailers and provides much of Google’s revenue. The tech behemoth took in some US$24 billion in 2016 from the United States alone — about 76 per cent of the country’s search ad market.
That popularity means it’s important for online retailers to understand the advertising process. What factors help links appear first on the page? Are some retailers better at this critical competition for visibility?
Furthermore, some ads are for Google’s own retail site. Does that matter? Should we be concerned that Google has several ways to influence which ads we see?
Competing for visibility
Search advertising requires many decisions. Should retailers sponsor just a few keywords, or many? Which ones should they choose? How much money should they offer to pay Google for each word?
These decisions matter because customers click more on links near the top of the page. The first link displayed can have double the “click-through” rate of the second one.
Research at our school, The Goodman School of Business at Brock University, found that some firms are much better than others at managing this visibility challenge. Among online-only retailers like Amazon, the greatest differences are in search rankings. The best firms get their ads nearer the top at relatively lower costs.
Among retailers like Staples that are multi-channel (meaning they also have bricks-and-mortar stores), the biggest differences instead are in the rates that consumers click on links and buy products. The best firms get more clicks and more sales per ad dollar. Surprisingly, multi-channel retailers tend to be more efficient overall than the online-only ones at search advertising.
A second study examined how popularity, payments and competition affect rankings. Not surprisingly, popular retailers tend to rank highly in search results. Retailers offering to pay Google more per word also rank higher.
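A toy version of that ranking logic can be sketched as a score combining the offered payment with a popularity or quality factor. This is a simplification invented for illustration, not Google’s actual auction, and the retailer names and numbers are made up:

```python
def rank_ads(ads):
    """Order ads by a simple score = bid * quality, a simplified
    stand-in for how payment and popularity jointly set rankings."""
    return sorted(ads, key=lambda a: a["bid"] * a["quality"], reverse=True)

ads = [
    {"name": "BigBoxRetail", "bid": 1.00, "quality": 0.9},   # score 0.90
    {"name": "NicheShop",    "bid": 1.50, "quality": 0.4},   # score 0.60
    {"name": "PopularStore", "bid": 0.80, "quality": 1.0},   # score 0.80
]
# Popularity lets PopularStore outrank NicheShop's higher bid.
print([a["name"] for a in rank_ads(ads)])
```

The sketch captures the finding in miniature: a popular retailer can hold a high position with a modest bid, while an obscure one must pay more per word to climb the page.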
Payments to Google sway rankings
Those payments have more influence on search rankings for obscure web pages than for popular ones. Popular sites tend to rank highly regardless of the payment they offer Google.
When multi-channel retailers like Staples face few competitors for keywords, popularity and payment have a large impact on their rankings. But when there are many competitors, multi-channel retailers tend to fall down the page. They seem to rely more then on their physical stores and less on the web.
Online-only retailers don’t have that option, and seem to sponsor links regardless of the competition. Popular online-only retailers generally appear high on the page no matter the number of competitors. Similarly, competition seemingly does not affect the payments they offer.
(It will be interesting to see how this distinction evolves as some online-only retailers add bricks-and-mortar stores, as with Amazon’s takeover of Whole Foods. Or as traditional retailers move increasingly online, as with Walmart’s purchase of online men’s retailer Bonobos.)
Google’s wide reach
Google also places ads elsewhere in its AdSense network, such as on the web pages you visit. It chooses those ads partly by tracking your previous searches. For example, if you searched for “laptops” earlier, you’ll likely see ads for laptops on subsequent websites.
Google also places ads in its free version of Gmail. It chooses those ads in part by scanning your emails’ contents. If you’ve been exchanging messages about laptops lately, you’ll likely see ads for them there. Google recently announced plans to end that scanning, however, due to the confusion it’s caused for its paid Gmail users.
The company influences online ads’ visibility in other ways too. Its search engine prefers sites that work well on mobile devices like smart phones. Its Chrome web browser will soon block annoying ads that pop up on the screen or that auto-play noisy videos.
Google has a lot of control over what we see online. It’s one of the ethical questions that all search engines raise. An added concern here is that Google sells products through its own Shopping site. That means it competes with the very companies whose ads it displays, all while controlling whose ads are easiest to see.
When European Union regulators investigated this potential conflict of interest, they found that Google Shopping links tend to appear much higher in Google searches than those of rival marketplaces. The EU is therefore fining Google’s parent company, Alphabet, US$3.6 billion for antitrust violations.
Another EU probe
The EU is also investigating the company for requiring other search sites to show Google ads. A third inquiry is examining the firm’s insistence that all Android devices give priority to Google over other search sites. More fines could follow.
Google is involved in yet another controversy this week over its funding to law professors whose research favoured the company’s practices. That funding was not always disclosed.
Google argues that it simply gives consumers what they want. Nonetheless, the allegations suggest that Google may have forgotten its onetime “Don’t be evil” motto. It’s worth watching whether this ethical and legal controversy will harm either its brand image or its profits.
How brands turn customers into devoted followers
Many consumers like the products they buy, but some people go beyond liking. They actively advocate for the companies and concepts behind those products.
Think of Apple Inc. and its trendsetting iPhones, celebrating their ninth anniversary in Canada on July 11. The phones are certainly high-quality. But many consumers, bloggers and media critics have also long raved about the firm itself and its overall design approach. Those “evangelists” don’t work for Apple, but voluntarily endorse it and its entire product line.
By comparison, other cell phone companies rarely inspire such devotion. People might like individual phone models, but don’t connect much to the manufacturer behind them.
Many firms would love to see such enthusiasm among their customers, reviewers and retailers. But how can they create these external evangelists?
A study at our business school explored this question by examining Ontario wineries. Most of these wineries are small and have limited marketing budgets. Nonetheless, they have instilled devotion among commentators and consumers for their cool-climate wines.
Akin to religious conversion
This success is partly due to grape and wine quality improvements over recent decades. But high quality alone is not enough to inspire evangelism.
The researchers started by observing customer-related activities at the wineries. They also interviewed people who were external to the wineries but involved with their products. Examples include reviewers, retailers, and restaurateurs. The study examined how their views and relationships with wineries evolved over a five-year period.
The research revealed that evangelism develops much like religious conversion (perhaps fittingly, given wine’s historical association with religion).
The process begins when wineries host events involving customers and other participants. Examples include vineyard tours and wine tastings.
These events proceed like religious rituals. They involve participants in ceremonial procedures, like the sequence of steps involved in wine tasting. They feature symbolic objects, like the special glasses and descriptive labels used with different wine types.
These rituals also include evocative storytelling and social interaction. One set of rituals emphasizes the guilty pleasures of enjoying wine. A second set emphasizes the wines’ history and production methods. In these, winemakers describe the traditions of their families and regions, often tracing their history back to the “old country” in Europe. A third set of rituals emphasizes the prestige the wineries have garnered as they increasingly produce world-class wines.
During these rituals, some participants experience emotional responses. They feel joy at being part of pleasurable events with other happy people. They are impressed by the complexity of wine. They find it eye-opening to learn how to taste the differences among various wine types. They also admire the effort that winemakers put into their craft in pursuit of perfection.
Those responses connect participants emotionally to each other, to winemakers and to the wine itself. These people become evangelists. They subsequently promote wines and the wine-making region. For them, wine has become more than what’s in the bottle.
Interestingly, certain participants are more prone to such conversion. For example, some people see themselves as “foodies” or “wine aficionados.” Others strongly identify with Ontario or its wine-making regions. Both groups are more likely to react emotionally and become evangelists.
Not everyone buys in
Conversely, evangelism is less likely among those who see themselves as simple consumers. They just want a tasty wine at a reasonable price. It also occurs less among people with strong professional identities, like quality control inspectors. They see the rituals as mere marketing exercises.
You can see similar elements in play with Apple. Its consumers and promoters are often described as cult-like. Its ethereal stores are like shrines, and Steve Jobs was its charismatic but demanding high priest (a segment in The Simpsons television show highlighted this quasi-religious view). Some Apple ads don’t even mention its products. Instead, they emphasize its role in customer lifestyles.
The research also suggests that evangelism is more likely when firms provide authentic experiences for participants. This authenticity helps create the emotional responses and mutually supportive relationships. The desired responses won’t occur if consumers view the rituals as artificial add-ons.
To support this process, some tech companies now have “Chief Evangelists” alongside their marketing departments. They serve as ministers to promote their companies’ practices to external audiences.
Size is no guarantee
Businesses don’t need to be large or wealthy to inspire evangelism. The key is the ability to create authentic relationships. For example, Canadian whisky distilleries are generally much larger and wealthier than Ontario wineries. But those distillers have not been nearly as successful at inspiring evangelism.
However, even successful evangelism does not guarantee enduring success. Look at BlackBerry, previously known as Research In Motion. Its email-oriented devices with their physical keypads once inspired such devotion among business professionals that they were dubbed “crack-berry” addicts.
But Apple’s touchscreen iPhones expanded smartphones’ appeal beyond business email. They attracted more customers and enabled more uses. The new religion consequently overtook the old one. And now, it’s Apple that faces threats from the ongoing evolution of technology.
Memo to Gordon Gekko: Ethics, not greed, boost profits
Stories involving business ethics appear regularly in the news. Some report good deeds, but most allege scandalous corporate behaviour. While these may seem like examples of businesses choosing money over morals, that’s a false choice. Unethical behaviour is not only embarrassing from a public relations standpoint, it can also be unprofitable for firms and their investors.
One ongoing scandal, both in Canada and the United States, involves banks selling unwanted financial services to their customers. The story began south of the border with Wells Fargo. The bank admitted in September 2016 that thousands of its employees had created more than two million accounts without customer permission.
The problem arose after management set overly aggressive sales targets. Employees felt pressured to open accounts regardless of customer need. Some ex-employees claim they were fired for refusing to play along.
Regulators fined Wells Fargo US$185 million and its CEO resigned, but the fallout continues. The bank cancelled executive bonuses and demoted several managers. A customer lawsuit is seeking compensation of US$142 million. Lawyers speculate that employees may have created 3.5 million unauthorized accounts.
Canada’s banks face similar allegations
In March, CBC News began reporting an eerily similar Canadian story. Hundreds of TD Bank employees complained about unrealistic sales targets. They too felt pressured to sell needless services. Similar stories emerged from other banks and even credit unions. A parliamentary committee is set to investigate this month.
Dubious ethics are not limited to banking. Investors in the Trump Hotel in Toronto won a lawsuit last fall against its developer, Talon International, and its manager (a Donald Trump company). The investors claimed they had been deceived. Instead of earning handsome profits, they had suffered losses. The hotel went into receivership, and creditors took it over in March.
Trump “University” in the U.S. also faced allegations of deception. Its seminars promised real estate investment “secrets” from Trump “experts”. Its students sued because the seminars apparently involved neither secrets nor experts. New York’s attorney general called it “straight-up fraud” and sued too. Trump eventually agreed to reimburse US$25 million. A judge approved the settlement in March.
Greed is good? Not really
These businesses seemingly followed Gordon Gekko’s “greed is good” mantra from the film Wall Street. They’re doing what’s profitable rather than what’s right. In this context, investors might think they have no choice but to tolerate bad behaviour to get good returns.
However, unethical behaviour does more than disgust the public and produce bad customer service. A study here at the Goodman School of Business revealed that it also harms companies and investors. Brock University’s Institute for International Issues in Accounting funded the study.
The probe examined 541 multinational corporations over a three-year period. It compared their financial performance, stock market returns and ethical reputations, including how companies treat their employees, customers and communities.
Perhaps surprisingly, the research revealed that investors by default expected decent, though not perfect, corporate ethics. Perhaps tellingly, investors held lower expectations of American firms than of firms elsewhere.
Share prices rise when ethics in play
More importantly, companies with improving ethics tended to have clearer financial reports and better financial performance. Conversely, transparency and profitability suffered when reputations fell.
Those reputational changes also affected investors. Share prices rose an average of 1.1 per cent within just three days after ethical ratings improved. Conversely, they fell 1.6 per cent when reputations dropped.
Put simply, when corporations paid attention to ethics, their finances improved. When investors paid attention to corporate ethics, their returns improved.
Consider some concrete examples. TD’s stock fell 5.5 per cent the day after news broke about the bank’s aggressive sales tactics. That cost shareholders about $7.2 billion.
At Wells Fargo, share prices slid for weeks, dropping 13 per cent over the month of September. That’s more than $30 billion lost by investors. Another month passed before the stock began to recover.
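The dollar figures above follow from simple arithmetic: the percentage price drop multiplied by the company’s market capitalization. A back-of-envelope check, using rough, illustrative market caps (the round numbers below are approximations, not exact values):

```python
# Back-of-envelope check: shareholder value lost equals the percentage
# share-price drop times market capitalization.
# The market caps below are rough approximations for illustration only.

def value_lost(market_cap_billions, pct_drop):
    """Dollar value (in billions) lost from a pct_drop percent price decline."""
    return market_cap_billions * pct_drop / 100

td_loss = value_lost(131, 5.5)   # roughly 7.2 billion, matching the TD figure
wf_loss = value_lost(231, 13.0)  # roughly 30 billion, matching Wells Fargo
```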
Bad ethics isn’t always due to sales targets and Gordon Gekko-ish bosses. Sometimes a company’s operating practices encourage it.
Analytics should benefit customers
For example, consider banking software that statistically analyzes customer data. It can prompt employees about financial services to offer each customer. If the only goal is to boost bank fees, then such data analysis becomes a “weapon of math destruction” against customers.
However, data analytics is not inherently evil. It can be mutually beneficial.
Operations researchers recently helped redesign a Turkish bank’s investment sales software. Now the computer only suggests new products that increase customers’ investment returns or reduce their portfolio risks. This win-win approach benefits both the bank and its customers.
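The bank’s actual software is of course far more sophisticated, but the “win-win” constraint it enforces can be illustrated with a toy filter. The field names and figures below are hypothetical, not drawn from the study:

```python
# Hypothetical sketch of a "win-win" product filter: recommend a product
# only if adding it raises the customer's expected return or lowers
# portfolio risk. The dict fields and numbers are illustrative assumptions.

def should_recommend(current, with_product):
    """Compare portfolio summaries before and after adding the product."""
    better_return = with_product["expected_return"] > current["expected_return"]
    lower_risk = with_product["risk"] < current["risk"]
    return better_return or lower_risk

portfolio = {"expected_return": 0.04, "risk": 0.10}
candidate = {"expected_return": 0.05, "risk": 0.10}   # higher return, same risk
recommend = should_recommend(portfolio, candidate)    # True: it helps the customer
```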
Our school’s research also suggests a new priority for regulators. Investors at least sometimes reward ethical firms and punish unethical ones. So let’s require companies to report more about their ethics-related practices. How are they treating employees, suppliers and customers? Are they respecting their communities and the environment?
Publicize these business practices, whether admirable or deplorable. Shine a little light into the corporate shadows. More information will enable investors and other stakeholders to make informed choices, and express those choices in terms that businesses understand: money.
Pickett’s Charge: What modern mathematics teaches us about Civil War battle
The Battle of Gettysburg was a turning point in the American Civil War, and Gen. George Pickett’s infantry charge on July 3, 1863, was the battle’s climax. Had the Confederate Army won, it could have continued its invasion of Union territory. Instead, the charge was repelled with heavy losses. This forced the Confederates to retreat south and end their summer campaign.
Pickett’s Charge consequently became known as the Confederate “high water mark.” Countless books and movies tell its story. Tourists visit the battlefield, re-enactors refight the battle and Civil War roundtable groups discuss it. It still reverberates in ongoing American controversies over leaders’ statues, Confederate flags and civil rights.
Why did the charge fail? Could it have worked if the commanders had made different decisions? Did the Confederate infantry pull back too soon? Should Gen. Robert E. Lee have put more soldiers into the charge? What if his staff had supplied more ammunition for the preceding artillery barrage? Was Gen. George Meade overly cautious in deploying his Union Army?
Politicians and generals began debating those questions as soon as the battle ended. Historians and history buffs continue to do so today.
Data from conflict used to build model
That debate was the starting point for research I conducted with military historian Steven Sondergren at Norwich University. (A grant from Fulbright Canada funded my stay at Norwich.) We used computer software to build a mathematical model of the charge. The model estimated the casualties and survivors on each side, given their starting strengths.
We used data from the actual conflict to calibrate the model’s equations. This ensured they initially recreated the historical results. We then adjusted the equations to represent changes in the charge, to see how those affected the outcome. This allowed us to experiment mathematically with several different alternatives.
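The study’s specific equations are not reproduced here, but one standard family of attrition models for this kind of analysis is Lanchester’s equations. A minimal sketch, assuming a square-law form with made-up (not calibrated) coefficients and strengths:

```python
# Illustrative Lanchester square-law attrition model:
#   dA/dt = -d_eff * D,   dD/dt = -a_eff * A
# The effectiveness coefficients and strengths below are hypothetical,
# not the calibrated values from the actual study.

def simulate(attackers, defenders, a_eff, d_eff, dt=0.01, max_steps=100_000):
    """Euler-step the coupled attrition equations until one side is gone."""
    for _ in range(max_steps):
        a_loss = d_eff * defenders * dt
        d_loss = a_eff * attackers * dt
        attackers = max(attackers - a_loss, 0.0)
        defenders = max(defenders - d_loss, 0.0)
        if attackers == 0.0 or defenders == 0.0:
            break
    return attackers, defenders

# Hypothetical scenario: 10,500 attackers against 5,000 entrenched defenders
# whose per-soldier effectiveness is three times higher.
survivors = simulate(10_500, 5_000, a_eff=0.01, d_eff=0.03)
```

Under a square law, numerical superiority pays off disproportionately, which is why such models are sensitive to how many brigades each side commits; the sketch only illustrates the mechanism, not the study’s actual results.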
The first factor we examined was the Confederate retreat. About half the charging infantry had become casualties before the rest pulled back. Should they have kept fighting instead? If they had, our model calculated that they all would have become casualties too. By contrast, the defending Union soldiers would have suffered only slightly higher losses. The charge simply didn’t include enough Confederate soldiers to win. They were wise to retreat when they did.
We next evaluated how many soldiers the Confederate charge would have needed to succeed. Lee put nine infantry brigades, more than 10,000 men, in the charge. He kept five more brigades back in reserve. If he had put most of those reserves into the charge, our model estimated it would have captured the Union position. But then Lee would have had insufficient fresh troops left to take advantage of that success.
Ammunition ran out
We also looked at the Confederate artillery barrage. Contrary to plans, their cannons ran short of ammunition due to a mix-up with their supply wagons. If their generals had better coordinated those supplies, the cannons could have fired twice as much. Our model calculated that this improved barrage would have been like adding one more infantry brigade to the charge. That is, the supply mix-up hurt the Confederate attack, but was not decisive by itself.
Finally, we considered the Union Army. After the battle, critics complained that Meade had focused too much on preparing his defences. This made it harder to launch a counter-attack later. However, our model estimated that if he had put even one fewer infantry brigade in his defensive line, the Confederate charge probably would have succeeded. This suggests Meade was correct to emphasize his defence.
Pickett’s Charge was not the only controversial part of Gettysburg. Two days earlier, Confederate Gen. Richard Ewell decided against attacking Union soldiers on Culp’s Hill. He instead waited for his infantry and artillery reinforcements. By the time they arrived, however, it was too late to attack the hill.
Was Ewell’s Gettysburg decision actually wise?
Ewell was on the receiving end of a lot of criticism for missing that opportunity. Capturing the hill would have given the Confederates a much stronger position on the battlefield. However, a failed attack could have crippled Ewell’s units. Either result could have altered the rest of the battle.
A study at the U.S. Military Academy used a more complex computer simulation to estimate the outcome if Ewell had attacked. The simulation indicated that an assault using only his existing infantry would have failed with heavy casualties. By contrast, an assault that also included his later-arriving artillery would have succeeded. Thus, Ewell made a wise decision for his situation.
Both of these Gettysburg studies used mathematics and computers to address historical questions. This blend of science and humanities revealed insights that neither specialty could have uncovered on its own.
That interdisciplinary approach is characteristic of “digital humanities” research more broadly. In some of that research, scholars use software to analyze conventional movies and books. Other researchers study digital media, like computer games and blogs, where the software instead supports the creative process.
Student grade expectations vs. reality
Two Goodman professors are examining why there’s often a gap between what grades students think they will get, and what they actually get.
Building on previous research, Goodman School of Business professors Michael Armstrong and Herb MacKenzie set out to answer two questions: what factors contribute to the difference between student grade expectations and reality; and what it takes to cause students to change their study habits.
Armstrong first started researching the issue a number of years ago, when he noticed that students often told him they intended to finish a course with high grades even though they had struggled all term leading up to the final exam.
He would often think to himself: ‘Well, yes, technically that’s possible if you pull off an 80 per cent on your final, but chances are you won’t be getting that 80 if you’ve been scoring 40 per cent on your quizzes.’
“I know this, but maybe they don’t. Maybe I should tell them, but would that really sink in? Would they really believe it?” Armstrong said.
To help his students get a more realistic view of where they stand and what their academic goals should be, the professor started researching the difference between the grades university students expected to get and what they actually got.
The associate professor of Finance, Operations and Information Systems created a computer program that forecasts the marks students will earn on their final exam based on what they scored in assignments and tests.
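The published forecasting method is not described here, so the following is only an illustrative sketch of one simple approach: assume performance on the remaining work matches the weighted average earned so far. The course weights and scores below are hypothetical.

```python
# Illustrative grade forecast (not the published model): assume the student
# performs on remaining work at the weighted average earned so far.
# Scores are percentages; weights are course-weight fractions summing to 1.

def forecast_final_grade(scores, weights_done, weight_remaining):
    """Project the course grade from completed items plus extrapolation."""
    earned = sum(s * w for s, w in zip(scores, weights_done))
    avg_so_far = earned / sum(weights_done)
    return earned + avg_so_far * weight_remaining

# Hypothetical course: quizzes worth 20% (scored 40%), midterm worth 30%
# (scored 45%), final exam worth the remaining 50%.
projected = forecast_final_grade([40, 45], [0.2, 0.3], 0.5)  # about 43
```

Under this extrapolation, a student averaging in the low 40s projects to finish in the low 40s, far from a hoped-for 80, which is exactly the kind of gap such a forecast makes visible.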
Data was collected from 144 Goodman School of Business students and, of these, 29 per cent reported that their forecasted grades were lower than expected, while six per cent said the forecasts were higher than expected.
But Armstrong noticed something rather odd from the results of that study, “A Preliminary Study of Grade Forecasting by Students,” published April 2013 in Decision Sciences Journal of Innovative Education.
He expected the students who were surprised by the forecasted marks would change their study habits in an effort to get a better grade, but it didn’t happen.
Intrigued, Armstrong and MacKenzie, associate professor of Marketing, International Business and Strategy, decided to take the earlier research one step further. In January their paper, “Influence of anticipated and actual grades on studying intentions,” was published in the International Journal of Management Education.
The researchers had 278 first-year business students fill out two surveys that were more detailed than the survey used in the 2013 study. These surveys included additional questions such as their high school grades, demographics and a personality test that measures how much people feel in control of their lives.
The students’ high school grade average was 83 per cent and they set an average target grade of 77 per cent in the first-year business course they were taking.
By Week 8 of the course, the group’s average was 63 per cent, and it dropped to 61 per cent by the end of the course.
Armstrong says previous education research has shown that students generally tend to overestimate their abilities, especially at the start of a university education.
“We’re naturally optimistic as human beings,” says Armstrong. “We don’t necessarily have feedback that would actually tell us how we performed, so we kind of say, ‘Well, I don’t know but I think I’m pretty good.’”
As expected, the students said they would increase their studying if their grades were lower than the target grades that they had set for themselves.
But, as in the first study, the grades that Armstrong’s computer program forecasted had no effect on whether or not students changed their study habits.
“We don’t yet know why this is,” he said. “Perhaps they might not believe the forecast or they might think, ‘even if the forecast is reliable for an average student, it’s not reliable for me.’”
Other findings include:
- students with the most unrealistic grade expectations in the beginning of the course still have unrealistic expectations at the end
- confident students — that is, those with higher scores on the personal control test — set higher grade targets but generally don’t achieve these higher targets
- accomplished students — that is, those with higher high school averages — are less likely to overestimate the grades they can reach
Armstrong says he hopes the research results will help guidance counsellors and parents to help high school graduates better prepare for university.