Computer Science: A Misnomer

By Raven Jiang

According to figures kept by Stanford’s Institutional Research office, there are 574 declared undergraduate computer science (CS) majors, making CS the largest major by far. The next largest major, human biology, has 323 declared majors – just over half the size of CS. This is the result of a half-decade-long trend, with CS enrollment growing by around 25 percent every year since 2009.
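
To make the compounding concrete, here is a quick back-of-the-envelope sketch in Python. The five-year window and the inferred 2009 base are illustrative assumptions, not figures reported by Institutional Research.

```python
# Back-of-the-envelope check of the enrollment figures above, assuming
# five compounding periods at 25 percent annual growth; the 2009 base
# is inferred, not reported in the article.
current_majors = 574
annual_growth = 0.25
years = 5

growth_factor = (1 + annual_growth) ** years
implied_2009_base = current_majors / growth_factor

print(f"Growth factor over {years} years: {growth_factor:.2f}x")   # ~3.05x
print(f"Implied 2009 enrollment: {implied_2009_base:.0f} majors")  # ~188
```

In other words, growing by a quarter each year roughly triples a major in five years, which is what turns a mid-sized department into the largest one on campus.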

The question lingers at the back of our minds: Is Stanford University about to become the Stanford Institute of Technology? Looking at the enrollment numbers alone, that seems like a real possibility. Even though Stanford’s admitted classes are as diverse as they have ever been, many freshmen – from prospective pre-meds to math geeks to public policy enthusiasts – seem to end up in CS by the end of their sophomore year.

But enrollment numbers do not tell the whole story. Behind the monolithic count of declared CS majors is an extremely broad field of study that encompasses a huge variety of topics.

Any CS major would attest that there exist a few distinct subsets of CS that have little overlap with one another beyond the shared core classes. A student studying human-computer interaction is almost guaranteed never to be in the same class as their friend studying CS theory once they have taken the introductory systems and algorithms classes. The only reason they are nevertheless both CS majors is that we continue to see everything to do with computing as a single field of study.

It is time to abandon that outdated classification.

If we trace the roots of political science, economics, chemistry and even computer science itself, we find that they once all fell under the purview of philosophy. As these disciplines became rich and deep enough to warrant their own specialized niches, they discarded the broad and undescriptive “philosophy” label when the time was right.

Today, the exact opposite is happening with CS, and it makes little sense. Topics like network design and statistics, which existed independently long before the digital age, are being rebranded and subsumed under CS. Business analysts and sociologists alike are calling themselves “data scientists” on LinkedIn, and consumer service companies like Uber and Airbnb are branding themselves as technology companies simply because they use computers to solve problems – even though we do not consider farmers to be technologists simply because they employ automated machines or predictive weather models.

Imagine if people majored in mathematics to learn to run a company, or trade stocks, or develop iPhone apps, or sequence genes. That would be absurd, even if mathematical principles are essential to all those tasks. Yet that is essentially what Stanford CS students are doing in droves. As computing and coding become indispensable tools for those who seek knowledge in other fields, CS appears to have become the learn-to-do-anything-and-everything major, even if most people really only want to learn software development.

In light of this, CS ought to be split into four separate majors that better align with the depth and focus of the actual learning material: Computer Science for computing theory, Applied Computing for software engineering theory and practice, Human-Computer Interaction (HCI) for UI design and graphics, and Artificial Intelligence (AI) for statistical techniques and data processing. These majors would continue to share the core introductory programming and theory classes, just as chemistry and biology share the same mathematics and physics classes. The immediate administrative impact of doing so is manageable because the numerous concentrations in CS already function almost like distinct majors.

The longer-term advantage of this change is that it gives each of the distinct majors more flexibility in curriculum and staffing to cater to its students’ needs, bringing each closer to the departments outside CS that complement it most naturally. For example, HCI has more in common with art, product design and communications than it does with AI, and so should be encouraged to reach out and become an integral part of those majors instead of absorbing topics from those existing fields into the complex, amorphous blob that is CS today. Similarly, CS theory has more in common with mathematics, and AI with statistics.

The rule of thumb governing all of computer science and engineering is to build good abstractions and separate concerns. In economics, the same principle underlies specialization and trade. The CS major as it exists today is a poor abstraction: it bundles dramatically different fields of study whose only shared feature is that they all leverage computing. As everything from sociology to biology becomes dependent on digital technology to make progress, it is about time we separate coding as a tool for every discipline from computer science as a distinct field of study, just as mathematics split off from philosophy in the past.
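
For readers outside the field, a minimal, purely hypothetical sketch of what separation of concerns looks like in code may help; every name here is invented for illustration. Each function owns one job and can change independently of the others, which is exactly the property the author argues the CS major lacks.

```python
# A hypothetical illustration of separation of concerns: each function
# owns one job, so any layer can change without touching the others.

def fetch_grades(student_id: str) -> list:
    """Data-access layer: knows where grades live, nothing about display."""
    records = {"s001": [3.7, 3.9, 4.0]}
    return records.get(student_id, [])

def compute_gpa(grades: list) -> float:
    """Logic layer: pure arithmetic, no storage or presentation details."""
    return sum(grades) / len(grades) if grades else 0.0

def render_report(student_id: str) -> str:
    """Presentation layer: composes the other layers without duplicating them."""
    return f"{student_id}: GPA {compute_gpa(fetch_grades(student_id)):.2f}"

print(render_report("s001"))  # s001: GPA 3.87
```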

Contact Raven Jiang at jcx ‘at’ stanford.edu.

Article source: Stanford Daily

What Are MOOCs Good For?

By Justin Pope

“We’re nearing the point where it’s a superior educational experience, as far as the lectures are concerned, to engage with them online,” says a Harvard professor. If that’s true, traditional universities will have to show that most of the other things they offer on campus can’t be replaced by technology.

A few years ago, the most enthusiastic advocates of MOOCs believed that these “massive open online courses” stood poised to overturn the century-old model of higher education. Their interactive technology promised to deliver top-tier teaching from institutions like Harvard, Stanford, and MIT, not just to a few hundred students in a lecture hall on ivy-draped campuses, but free via the Internet to thousands or even millions around the world. At long last, there appeared to be a solution to the problem of “scaling up” higher education: if it were delivered more efficiently, the relentless cost increases might finally be rolled back. Some wondered whether MOOCs would merely transform the existing system or blow it up entirely. Computer scientist Sebastian Thrun, cofounder of the MOOC provider Udacity, predicted that in 50 years, 10 institutions would be responsible for delivering higher education.

Then came the backlash. A high-profile experiment to use MOOCs at San Jose State University foundered. Faculty there and at other institutions rushing to incorporate MOOCs began pushing back, rejecting the notion that online courses could replace the nuanced work of professors in classrooms. The tiny completion rates for most MOOCs drew increasing attention. Thrun himself became disillusioned, and he lowered Udacity’s ambitions from educating the masses to providing corporate training.

But all the while, a great age of experimentation has been developing. Although some on-campus trials have gone nowhere, others have shown modest success (including a later iteration at San Jose State). In 2013, Georgia Tech announced a first-of-its-kind all-MOOC master’s program in computer science that, at $6,600, would cost just a fraction as much as its on-campus counterpart. About 1,400 students have enrolled. It’s not clear how well such programs can be replicated in other fields, or whether the job market will reward graduates with this particular Georgia Tech degree. But the program offers evidence that MOOCs can expand access and reduce costs in some corners of higher education.

Meanwhile, options for online courses continue to multiply, especially for curious people who aren’t necessarily seeking a credential. For-profit Coursera and edX, the nonprofit consortium led by Harvard and MIT, are up to nearly 13 million users and more than 1,200 courses between them. Khan Academy, which began as a series of YouTube videos, is making online instruction a more widely used tool in classrooms around the world.

All this activity is beginning to generate interesting data about what MOOCs actually do. In September, MIT physicist David Pritchard and other researchers published a study of Mechanics ReView, an online course he teaches that is based on an on-campus course of the same name. The authors found that the MOOC was generally effective at communicating difficult material—Newtonian mechanics—even to students who weren’t MIT caliber. In fact, the students who started the online course knowing the least about physics showed the same relative improvement on tests as much stronger students. “They may have started with an F and finished with an F,” Pritchard says, “but they rose with the whole class.”
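
The article does not name the metric behind "relative improvement," but physics-education studies of this kind typically report the normalized gain, the fraction of the possible improvement a student actually achieves; that metric is assumed in the minimal sketch below, along with a 100-point test.

```python
# A minimal sketch of "normalized gain," the usual physics-education
# measure of relative improvement (an assumption here; the article does
# not name the exact metric Pritchard's study used).

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the possible improvement actually achieved."""
    if pre >= max_score:
        return 0.0
    return (post - pre) / (max_score - pre)

# A weak and a strong student can show the same relative improvement:
print(normalized_gain(pre=20, post=52))  # 0.4: closed 40% of an 80-point gap
print(normalized_gain(pre=70, post=82))  # 0.4: closed 40% of a 30-point gap
```

This is why a student can start and finish with an F on an absolute scale yet still rise with the whole class.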

For some people, especially adults in search of continuing education, even dropping out of a MOOC may well be a kind of victory—over an old model of credit-hours and semester-long courses that makes no sense for them. If they want to see whether they’d be interested in a topic, or just want snippets of material, why should they pay for, and sit through, an entire 12-week syllabus?

For all the hype, MOOCs are really just content—the latest iteration of the textbook. And just like a book on a library shelf, they can be useful to a curious passerby thumbing through a few pages—or they can be the centerpiece of a well-taught course. On their own, MOOCs are hardly more likely than textbooks to re-create a quality college education in all its dimensions.

Justifying tuition

When Harvard and MIT announced the creation of edX, they said a major goal was to jump-start innovative teaching for their own students. That got little attention, at least beyond Cambridge, but there are signs it is happening. Many of the technologies central to MOOCs, built around interactivity and assessment, can be useful tools for students on campus, says MIT’s director of digital learning, Sanjay Sarma. MIT students can’t get credit for taking even MIT-produced MOOCs, but they still use MOOC tools in their courses. Two-thirds of them have taken a traditional course that uses the edX software platform.

Down Massachusetts Avenue, Harvard computer scientist David Malan says his campus has also seen “a marked uptick” in conversations about reinventing teaching. Malan’s Introduction to Computer Science course captures many of these currents. The on-campus version is Harvard’s most popular, with around 800 students. The MOOC version has about 350,000 registrants from around the world, ranging from preteens to 80-year-olds. Both versions use sophisticated, overlapping learning resources, from lecture videos to assessments. Their academic standards are the same.

Malan began videotaping lectures in 1999, but he says the tools of the MOOC bring a new dimension to his teaching. For example, lectures that typically take an entire class period can be broken up online into shorter, more focused units, allowing students to spend as much time on each segment as they need.

The paying Harvard students decide for themselves whether to attend the lectures or just catch them online. “I would like to think there’s a nontrivial psychological upside to the shared experience,” he says, but it’s up to them. Instead of necessarily having all 800 students attend each lecture, “I would rather have 400 students who want to be there,” he adds. Besides, “we’re nearing the point where it’s a superior educational experience, as far as the lectures are concerned, to engage with them online.”

If that’s true, it’s a terrifying but useful prod for traditional universities. At MIT, the edX experiment has been “a huge stimulus,” says Pritchard. Across higher education, “it’s making everybody sit up and answer the following question: ‘How can I justify charging students $45,000 a year to attend large lectures when they can find better exemplars on the Internet?’”

In Malan’s course at Harvard (where tuition, fees, room, and board actually run $58,607 this year), part of the answer is that even if the academic standard is identical, the full experience is not. The Harvard students get course sections and recitations with just a few students, a 90-minute weekly recap of the material, and office hours four nights a week (the class essentially takes over a dining hall). The on-campus course is almost cinematic in its production scale, with a staff of 100. To assist orders of magnitude more students in the MOOC, five staff members wade into discussion forums, along with student and alumni volunteers.

And of course, students not just at Harvard but at hundreds of other universities get much more than that. They get a credential that is necessary for many types of employment, plus access to alumni networks and mentorship. That’s why MOOCs shouldn’t necessarily threaten colleges: if established institutions make judicious use of learning technology where it demonstrably helps students, they gain credibility to insist that most of what else they offer on campus is a qualitatively different experience—one that technology can’t replace.

Teaching teachers

Education researchers are still just beginning to mine all the data that MOOCs generate about how students respond to the material. Researchers like Pritchard can track every step of every student through a MOOC; he says that for him to study his traditional students that way, “they’d have to carry a head-cam 24-7.” Eventually, such data should yield insights about the best ways to present, sequence, and assess particular subjects. Kevin Carey, who has researched MOOCs as director of education policy at the New America Foundation, points out that today’s MOOCs haven’t even begun to make serious use of artificial intelligence to personalize courses according to each student’s strengths and weaknesses (a surprise considering that pioneers like Thrun and Coursera’s Daphne Koller came from AI backgrounds).

Yet while MOOCs’ huge enrollments are fantastic for running educational experiments, that same scale makes the courses hard to teach. Pritchard’s MOOC draws a much wider range of abilities than his on-campus class at MIT. “It’s like we’re trying to teach from second grade up to seventh,” he says. His new project is an Advanced Placement physics course for high school students. By narrowing the target audience—high school students who believe they’re ready to take AP physics are likely to start within a fairly tight band of knowledge—he thinks he can teach more effectively than would be possible in a more diverse MOOC.

Indeed, for all the focus on the role of MOOCs in higher education, they might have a significant role to play in high schools and below. Teachers are already a big audience (a study of 11 MOOCs offered by MIT last spring found that nearly 28 percent of enrollees were former or active teachers). This is particularly promising because teachers pass what they learn on to their own students: when they make use of edX and other resources in their classrooms, they multiply the effect. As Coursera moves explicitly into teacher training, its classes could have as much impact by reaching a few hundred teachers as they would with thousands of other students.
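
The multiplier at work here is simple arithmetic; below is a tiny sketch with purely illustrative numbers (the article reports neither a teacher count nor a class size).

```python
# Rough multiplier arithmetic behind the teacher-training claim above.
# Both figures are illustrative assumptions, not data from the article.
teachers_reached = 300
students_per_teacher_per_year = 30

indirect_reach = teachers_reached * students_per_teacher_per_year
print(f"{indirect_reach:,} students reached indirectly per year")  # 9,000
```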

MOOCs alone can’t meet the oversized expectations of early boosters like Thrun—who themselves echoed would-be reformers over the decades who looked to radio, television, and the mail to democratize learning (see “The Crisis in Higher Education”). For better or worse, traditional methods of higher education showed remarkable persistence as those models emerged. Yes, this time might be different. But if MOOCs do prove revolutionary, it will be because educational institutions have finally figured out how to use them.

Justin Pope, a former higher-education reporter for the Associated Press, is chief of staff at Longwood University in Virginia.

Conference Time

By Gwendolyn Beetham

I'm writing this reflection on the train home from the airport, having just completed four days of conferencing at the National Women's Studies Association's Annual Conference.

As I mentioned in my initial post on the conference, the schedule was packed, which is certainly not unusual for a conference of this type. Rushing from this panel to that keynote, this meeting to that working lunch, I began to wonder when exactly I would have the time to reflect on my participation enough to get some coherent thoughts out online. (Since you're reading this after the conference has closed, I think you know the answer here.)

These concerns got me thinking about larger, ongoing discussions about time in academia, and in contemporary culture more broadly – specifically about the "disease of being busy."

This topic came up in our most recent #femlead discussion on Twitter. In a day packed full of meetings, email, and yet more meetings, chat participants wondered: where is the time for reflection, time to let ideas meander and take shape? Where is the time, as Mimi Nguyen asks, to be "wholly unproductive"?

This also ties in - here we go! - to discussions taking place at #NWSA2014. For example, in a roundtable discussion amongst Feministing.com's leadership that I moderated, concerns were raised about the "fastness" of online culture compared to the "slowness" of academe. Those of you who know my work here at University of Venus know that I am a huge fan of public intellectual work. However, I am also skeptical of the ways that some discussions play out online, especially on venues like Twitter and in publications that use algorithms to select content based on its ability to "break the internet" rather than on the strength of the content itself. In other words, while I definitely see the benefits of the fast pace of working online, I also see the benefits of the "slow knowledge" of the academy. As Janet Jakobsen recently noted: "By taking one’s time, one can resist both producing too quickly in order to meet the professional managerial imperative to be always 'busy, busy, busy,' and also too quickly consuming knowledge that would be better understood were there time to digest it."

Though it is true that conferences (especially when they are in picturesque locations like San Juan!) can offer pockets of slowness and opportunities for less productivity, I don't want to suggest that they can be - or should be - unproductive. It is always refreshing to hear new ideas, and grounding to connect with colleagues and old friends. But I don't feel that I had sufficient time to reflect - to let my thoughts fully percolate - amid the whirlwind of the last few days.

I hope that this has given you some food for thought, and that you'll bear with me as I try to digest enough to get my thoughts out onto the page. In the meantime, tell me: how do you experience conference time?

Article source: Inside Higher Ed. First published on November 16, 2014.

Peek into the Evolution of Medical School Education

By Kathleen Franco, M.D.

Changes in medical school haven't just been a result of new scientific discoveries. What many prospective medical students don't know is that their education has evolved greatly – and continues to do so.

Abraham Flexner published what's now known as the Flexner report in 1910. It urged U.S. medical schools to adopt a variety of standards from admissions to curriculum, as there were previously no set rules for medical schools.

In fact, many schools were created simply to make money. The schools ignored basic science, and students spent little or no time in a scientific laboratory. Students of color attended separate schools, as did women. While Flexner visited 155 U.S. medical schools, few met his expectations. Many closed in the aftermath.

Multiple changes occurred in the years that followed. Students wanting to enter medical school had to have at least two years of college and knowledge of biology, chemistry and physics. Medical schools increased their lectures to cover much more material and extended the length of training to four years.

Clinical rotations in multiple areas gained favor and were considered critical undertakings before gaining a medical degree. Internships, and later residencies, became the rule rather than the exception. Board certification became common and has moved from lifelong certification to an ongoing process that requires much more than sitting through classes.

In 1942, the Liaison Committee on Medical Education was established, and it still conducts on-site reviews, ensuring that every medical student is properly trained.

Let’s step into the medical school of today. Although some schools still rely on a lecture format, others have moved to interactive teaching by using problem-based learning, team-based learning, flipped classrooms and other methods. Most schools offer a clinical experience to students before their third-year clerkships. These patient care opportunities are frequently incorporated into other parts of the curriculum.

Before embarking on a clinical experience, students often participate in small group learning about how to take a medical history and perform a physical examination. Standardized patients, who are trained actors, role-play with faculty and students. Students are then assigned a preceptor and try taking a history and completing a physical exam on a real patient in the preceptor’s practice.

Diverse students from various backgrounds enrich medical education by offering different experiences, perspectives and opinions. Students who are exposed to greater diversity in their classmates report feeling better prepared to work with a greater range of patients.

The humanities play a larger role in today's medical education, figuring not only in the training itself but also on the MCAT. Proficiency in the sciences alone does not guarantee that a student will be a good communicator or easily build trusting relationships with patients.

Technology, as it has in so many fields, has moved medical education ahead. Anatomy training may now include virtual learning with holograms, while communication may be enhanced through avatars with faculty-inspired responses.

Some schools use iPads in the wards. Texting and tweeting may replace email communication with faculty. Small groups may meet online instead of in the classroom. New tools and methods to evaluate internal conditions are replacing invasive, costly and painful procedures.

Everywhere I look, I see exciting innovations popping up in medical education, heralding a new era of rich, experiential learning. Flexner would be in awe.

Article source: US News.
