Sunday, July 16, 2017

Keeping tabs on cheating

I tend to keep tabs open in my browser for weeks with interesting articles I want to explore in more depth. Then Firefox decides to update and crashes so miserably that the tabs are gone. So I'll try to at least post them here. No promises that I can do this with any kind of regularity, the way Retraction Watch does with its Weekend Reads.
  • The Japan Times has an interesting article debunking an excuse typically used by students from the Far East: "Confucius made me do it." It seems that the difference between allusion and "literary theft" was well known many centuries ago.

    "If East Asian students and researchers plagiarize, it’s not because of some archaic cultural programming; it’s because modern institutional cultures tacitly condone plagiarism, or lack clear policies for explaining and combating it."
  • In the New Scientist there was an interview, published in 2012, with Shi-min Fang, who was awarded the Maddox Prize for his work exposing scientific misconduct in China. It seems that there is a lot of controversy around his work.
  • At University College Cork in Ireland there was a spat about widespread contract cheating, as the Irish Times reports. Ireland is currently considering legislation to make advertising for or providing contract cheating services illegal.
  • Down under, Honi Soit, the weekly student newspaper of the University of Sydney, Australia, reports that the university had considered using anti-cheating software created by former University of Melbourne students, but decided against it after a trial. The idea was to analyse typing patterns and use multiple login questions in order to make it harder for students to submit essays purchased from contract cheating sites. Some of the issues included the necessity of being connected to the Internet to write an essay, forcing students to write with this system and not the editor of their choice, and a massive invasion of privacy that includes tracking the locations of the users and comparing them with the locations of their mobile phones. The software was felt to be impractical and invasive.
  • Back in June the Daily Times reported that the doctorate of the prorector of the Comsats Institute of Information Technology had been revoked by Preston University.
  • The former head of the Toronto school board lost his teaching certificate for plagiarism. According to The Globe and Mail, he has appealed the ruling and is willing to testify under oath about who helped him produce the plagiarized work.

Sunday, July 9, 2017

German plagiarism cases in the news

There were four articles in German news this past week or so about a very diverse collection of plagiarism cases. Here are the links and short summaries in English:
  1. The taz published an article by Markus Roth about a biography that Stefan Aust, a well-known German writer, published in 2016: „Hitlers erster Feind. Der Kampf des Konrad Heiden“ (Hitler's first enemy: Konrad Heiden's struggle). Heiden, a writer in exile in France, had published a biography of Hitler in the mid-30s. It seems, however, that Aust liberally used text from Heiden himself, just changing the present tense to the simple past tense or adding explanations of names that would have been clear to someone reading in the 30s, but not to present-day readers. Some examples are given in the taz article. Aust himself had apparently recently complained that people were looting Heiden's words, while stating that he was setting a monument to Heiden's works. „Wer erzählt hier eigentlich?“ (who is actually telling the story here?) is apparently a question difficult to answer, unless one has read much of Heiden's work, as Roth has done (he is also working on a biography of Heiden).
  2. Stern reports on a Facebook posting by German folk music star Stefanie Hertel against hate, on the historic occasion of Germany passing legislation permitting homosexual couples to marry. Her fans praised her words, but it turned out they weren't actually hers, but those of a TV game show moderator, Michael Thürnau. „Ich fand seine Worte so toll, dass ich ihm einfach nur recht geben konnte“, she defended herself according to Stern: "I found his words so awesome, that I just had to say that he's right."
  3. The DFG, the German funding organization for research, announced that it was reprimanding a scientist "in writing". A life scientist (no name or research institute mentioned) was found to have included extensive word-for-word copies from other publications, without reference, in a grant application. The DFG investigated, and the scientist conceded the copying, most of it in the "state of the art" section.
    Since I don't know what a "reprimand in writing" means, I have written to the DFG to ask for clarification. 
  4. In other DFG news, a Leibniz Prize (2.5 million €) was awarded to a researcher after all. Just prior to the award ceremony in March 2017, plagiarism allegations arose. The DFG postponed the award in order to investigate. They are satisfied that there was no plagiarism, and thus have now given out the award. The allegations were not made public.
Update: Marco Finetti, the spokesman for the DFG, clarified for me: A reprimand "in writing" is indeed just a letter written to the scientist. But since it has been decided on by the Hauptausschuss, the main body of the DFG, all the scientists in that board and the representatives of the state governments (who finance the universities in Germany) heard the details of the case and decided on this as the weakest sanction. "It is a big blow to the reputation of a scientist", Finetti claimed.

Friday, June 2, 2017

WCRI 2017, Day 4


I really wanted to hear John Ioannidis (Stanford University, Stanford, U.S.A.) speak in the morning about "Re-analysis and replication practices in reproducible research," but I was so tired that I didn't make it until later. I did have time, though, to speak with Skip Garner. I learned that eTBLAST, the text comparison and search tool that populated the Déjà vu database from MEDLINE, was turned off when he left his previous school. But there is a follow-on project, HelioBLAST. More on this later.

Ana Marušić led the session on retractions, which included a very curious case. There were three talks about this case; it was a shame that they could not have been combined into one talk three times as long (and without two talks on different topics in between).

Alison Avenell (University of Aberdeen, UK) gave the first two talks. She spoke first about "Novel statistical investigation methods examining data integrity for 33 randomized controlled trials in 18 journals from one research group." While preparing a Cochrane study, she and her colleagues noted a rather odd set of studies by the same Japanese authors that managed to recruit and interview 500 women with Alzheimer's and 280 males and 374 females with stroke in just a few months, interviewing the participants every four weeks over a five-month period. And the studies all had the same results, although the patients were supposedly different.

Doing some statistics on the reported values showed that it was highly unlikely that the data was genuine. They wrote to the authors and quickly received a reply that this was an error and that they would correct it. Instead of a retraction, however, only a correction was published.
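The underlying idea can be sketched in a few lines. This is a toy illustration, not Avenell's actual statistical method, and the trial values below are invented: independent trials on different patients should almost never report identical summary statistics, so a result that recurs across supposedly separate trials is a red flag worth investigating.

```python
# Toy red-flag check (invented data, not Avenell's actual method):
# count how often the same (mean, SD) pair is reported across trials.
from collections import Counter

def repeated_results(trials):
    """trials: list of (mean, sd) tuples, one per trial outcome.
    Return the values that appear in more than one trial."""
    counts = Counter(trials)
    return [value for value, n in counts.items() if n > 1]

trials = [(52.3, 4.1), (52.3, 4.1), (48.7, 3.9), (52.3, 4.1)]
print(repeated_results(trials))  # → [(52.3, 4.1)]
```

A repeated pair is not proof of fabrication on its own, but, as in the case described above, a whole set of trials sharing identical results on different patient groups is extremely improbable.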

By now Alison's group was looking at the 33 RCT studies from the group that they could find. They were published in 16 journals (the highest-ranked being JAMA) over 15 years, with a total of 26 co-authors at 12 institutions. The group tried to see what the impact of these papers was, that is, in how many reviews this incorrect data was currently being used. They found 12 of the studies in secondary publications and one guideline, and 8 trials that used these results as the basis for their research rationale, involving over 5000 people! That means that even at a conservative cost of $500 per person per study, $2.5 million was spent in the belief that the work was expanding on solid research. And since they were only looking at English-language publications, the impact was probably even wider.

In all, it took three years to get the JAMA paper retracted. Someone from the audience noted that it is difficult to get journals to retract papers anyway, mostly for legal reasons. Andrew Grey (University of Auckland, New Zealand) reported on the problems they had getting any of the papers retracted and their own paper about the case published (Neurology. 2016 Dec 6;87(23):2391-2402. Epub 2016 Nov 9). He used a timeline that got more and more complicated as time passed as they kept writing back to unresponsive journals. He identified some interesting issues:
  • How should journals deal with meta-reviews that are based on retracted work?
  • Should journals be more forthcoming in the face of unresolved concerns? If it takes 3 years to retract an article, there will be many people who read the paper and perhaps acted on it.
  • Should published correspondence about retracted papers also be retracted?
  • They also emailed medical societies and institutions at which the authors worked - should they have done this?
One of the other talks was by Marion Schmidt (DZHW, German Centre for Higher Education Research and Science Studies, Berlin, Germany) about an analysis she did of the annotation of retractions in PubMed and the Web of Science. She first determined that the word "retraction" is defined differently by various organizations. She noted that many retraction studies are based on selecting papers marked "Retracted Publication" as the article type in PubMed. She conducted a title-based search in PubMed and on the Web of Science using "withdrawal" in the title and the article type marked as retracted, then validated manually. Surprise! There are retractions in PubMed that are not listed in the WoS, and vice versa. And not all withdrawals are marked at all. Sometimes a withdrawal in one database is marked as a retraction in the other one. She concluded that the formats used by publishers do not translate losslessly into the different databases, and wondered how citing authors can even be aware of a retraction if even PubMed and the WoS do not agree. Even if there were a database of retractions (the audience noted: Retraction Watch!), people would have to check all their references against it.
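This kind of cross-database comparison boils down to matching the same papers in two sources and diffing their retraction flags. A minimal sketch follows; the DOIs and flags are invented, and a real study would match exported PubMed and Web of Science records on PMID or DOI:

```python
# Invented example data: cross-check whether two bibliographic
# databases agree on which papers are flagged as retracted/withdrawn.

def retraction_mismatches(db_a, db_b):
    """db_a, db_b: dicts mapping DOI -> True if flagged as retracted.
    Return the DOIs, present in both databases, whose flags disagree."""
    shared = db_a.keys() & db_b.keys()
    return sorted(doi for doi in shared if db_a[doi] != db_b[doi])

pubmed = {"10.1000/a": True, "10.1000/b": False, "10.1000/c": True}
wos    = {"10.1000/a": True, "10.1000/b": True,  "10.1000/d": False}

print(retraction_mismatches(pubmed, wos))  # → ['10.1000/b']
```

Note that papers present in only one database (like "10.1000/c" and "10.1000/d" here) are a finding in themselves, which is exactly the other mismatch Schmidt reported.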

The other talk in the session was by Noemi Aubert Bonn (Hasselt University, Diepenbeek, Belgium). For some reason it was in the retractions session, although it was not about retractions but rather research about research integrity itself: How is it performed, how is it published, what are the consequences?

In a plenary session about the Harmonization of RI Initiatives, Maura Hiney from the Health Research Board Ireland (HRB), the lead author on the ALLEA European Code of Conduct for Research Integrity (2017), charted the progress made across the WCRI conferences: At the first conference people were discussing whether or not there really was a research integrity problem. Later conferences grappled with defining it, finding methods to investigate it, and defining who is responsible for it; now that there are so many different definitions and methods and policies, the question is how they can be harmonized. Simon Godecharle had presented various maps at the 2013 WCRI showing the wide variations that exist in Europe alone, starting with language. At least by 2017 there are fewer countries that have no policy at all.

Daniel Barr from Deakin University, Australia, spoke on the "Positive Impacts of Small Research Integrity Networks in Asia and Asia-Pacific," drawing on the Singapore Statement and noting that RIOs, research integrity officers, are quickly becoming the norm at universities.

Allison Lerner from the National Science Foundation, U.S.A. spoke about the NSF's role in "Promoting Research Integrity in the United States." She spoke of their processes for auditing and investigating cases of fraud, and noted that they have had some extensive plagiarism cases, some of which also involved fraud. Both PubPeer and Retraction Watch were given a shout-out as non-governmental bodies that work on monitoring integrity.

I then did some more session hopping, as the interesting talks were in different rooms.

Skip Garner talked about finding potential grant double-dippers; the process is similar to finding duplicate abstracts in MEDLINE or duplicate abstracts for papers given at different conferences (or at the same conference in different years). He spoke a bit about Déjà vu and how eventually many of the duplicates he uncovered were retracted. But the rate of retractions is lower than the rate at which new questionable manuscripts enter the scientific corpus, which is worrying. Even two years after a retraction, 20 % of retracted papers are not tagged as such, and thus people continue to use them.

For fun (yes, computer people have perhaps different ideas of "fun" than other folks) he downloaded the abstracts from scientific meetings that had more than 5000 abstracts each and permitted a longitudinal investigation because the meeting recurs yearly or every other year. Each abstract was compared with each of the other abstracts at the meeting itself, with all abstracts from the previous meetings, and with his collection of MEDLINE abstracts.
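A toy version of this kind of duplicate screening can be written with Python's standard library. eTBLAST itself uses a more elaborate ranked keyword search; this sketch, with invented abstracts, just shows the all-pairs comparison idea:

```python
# Toy pairwise duplicate screening: flag every pair of abstracts whose
# textual similarity exceeds a threshold. (Not eTBLAST's algorithm.)
from difflib import SequenceMatcher
from itertools import combinations

def flag_duplicates(abstracts, threshold=0.8):
    """Return index pairs of abstracts whose similarity >= threshold."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(abstracts), 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            pairs.append((i, j))
    return pairs

abstracts = [
    "We recruited 500 women and measured bone density over five months.",
    "We recruited 500 women and measured bone density over five months!",
    "A completely unrelated study of typing patterns in student essays.",
]
print(flag_duplicates(abstracts))  # → [(0, 1)]
```

The all-pairs approach is quadratic in the number of abstracts, which is why real systems first narrow the candidates with a keyword search before doing any detailed comparison.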

He encountered multiple submissions, replicate abstracts with different presenting authors, replicate abstracts from previous years, and plagiarized abstracts. He assured the audience that he did not run this meeting :)

His double-dipping work has been published (Double dipping, same work, twice the money) and reported on (Funding agencies urged to check for duplicate grants) in Nature in 2013. [Drat, I should have downloaded the first one when I was in Amsterdam. The VU has full access to Nature, my school doesn't. Of course, I could buy it for $18....].

During Q&A he was asked if he had reported the cases he found. Indeed he did, and the journals didn't like it. Seems the US government also subpoenaed his database...

Miguel Roig (St. John's University, NY, U.S.A.) spoke about editorial expressions of concern (EoC). He and some of the Retraction Watch crew pulled EoCs out of PubMed and examined them. They looked at the wording of the EoC and the eventual fate of the paper. Only 7 % resulted in a correction, 32 % resulted in a retraction, and in 4 % of the cases the matter was resolved. For the rest (almost 58 %!) there was no follow-up information to be found, even when the EoC had been published four years previously. He referred to a very recent publication (Feb. 2017) on the same topic: Melissa Vaught, Diana C. Jourdan & Hilda Bastian, "Concern noted: a descriptive study of editorial expressions of concern in PubMed and PubMed Central", Research Integrity and Peer Review 2017 2:10. He closed by encouraging journals to be more specific about the reason for the concern and to use EoCs more often.

Mario Malicki (University of Split School of Medicine, Split, Croatia) spoke about his "hobby project" (i.e. no funding) looking at third-party inquiries about possible duplicate publications. He discovered that the National Library of Medicine will assign a "duplicate publication" tag in the [pt] field if it finds a pair during manual indexing. But no action is taken, and since the tag is hard to find, people don't see it. He downloaded 555 potential duplicate publications and checked whether they had been retracted. He contacted 250 editors about the duplicates, although 16 editors' email addresses could not be located at all. Not all editors bothered to answer his inquiry, although a few of the papers were eventually retracted. The correspondence with the editors was evaluated, as specific questions had been asked, such as: are you aware of the duplicate-publication tagging in MEDLINE? Only 1 editor was aware of it, 15 said no, and 165 did not bother to answer the additional question!
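For readers who want to find such records themselves: PubMed exposes "Duplicate Publication" as a publication type, so one could build an NCBI E-utilities query along the following lines. This is only a sketch that constructs the standard esearch URL; actually sending the request needs network access (and, for heavy use, an NCBI API key).

```python
# Build an NCBI E-utilities esearch URL for records tagged with the
# "Duplicate Publication" publication type. Constructs the URL only;
# no request is sent here.
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def duplicate_publication_query(retmax=100):
    params = {
        "db": "pubmed",
        "term": '"Duplicate Publication"[pt]',
        "retmax": retmax,
        "retmode": "json",
    }
    return ESEARCH + "?" + urlencode(params)

print(duplicate_publication_query(10))
```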

Mario catalogued the answers and the reasons given for not taking action and, as far as he could obtain the information, the excuses of the authors and, above all, of the publishers for their errors. It seems common for an article to be published twice in different volumes (104 times), doubly published in a sister journal (64 times), or even published twice in the same volume (21 times). Over the span of 4 years, 9 % of the articles identified have been retracted. He did not determine the publishers or the precedence of the publications.

J.M. Wicherts (Tilburg University, Tilburg, The Netherlands) has a theory, namely that transparency and integrity of peer review are somehow linked. In order to show this, he set up QOAM, the Quality of Open Access Market, where readers rate a journal on various parameters on a scale of 1 to 5. Since it is not made clear which end of the scale is best (1 is the top grade in Germany), this raises a cross-cultural issue. To date about 5000 ratings have come in, and there is one particularly active contributor. He saw this positively; I would check to make sure it is not someone hired by certain journals. As a quick test, I looked up my favorite rabid anti-vaxxer paper, published in a journal that was on the now-defunct B-list. Sure enough, it was in there, with three reviews and a grade of 4.6. I don't really believe that this is a good idea.

At the closing session Nick Steneck presented the Amsterdam Agenda for assessing the effectiveness of what are seen as the most important ways to promote integrity in research, which had been worked on over the past days.

It was quite an experience, these two conferences in Brno and in Amsterdam. They were different in focus, but both offered much for me to learn. And it was fantastic to meet in person all these people I had only corresponded with by email! The next WCRI will be in 2019 in Hong Kong, jointly organized by people in Hong Kong and Melbourne.

I have one other link that I picked up from a tweet I want to preserve here: The Authorship and Publication subway map from QUT Library and Office of Research Ethics & Integrity.

Over and out!

Thursday, June 1, 2017

WCRI 2017, Day 3

Day 3 of the World Conference on Research Integrity began with a plenary session on the role institutions play in research integrity. Bertil Andersson, the president of the Nanyang Technological University Singapore, spoke on "Research and Research Integrity - a key priority for a young and fast rising university." He reported in a refreshingly open manner about quite a number of academic integrity cases his university has dealt with. In addition to much plagiarism, authorship disputes, and self-plagiarism, there was outright fraud. He asked how much a university can do to investigate a case once it has also hit the media. He presented four cases:
  1. NTU retracts NIE academic papers after malpractice investigations (The Straits Times), 2016
    A professor, hired in 2006, was contractually obligated to publish 10 papers in 3 years. An external whistleblower alerted the university to data fabrication that included an invented person and an invented company; eventually the police and the Ministry of Education were involved. The case led to 21 retractions. Such a clause is no longer in contracts.
  2. 3 Singapore-based scientists linked to research fraud (The Straits Times), 2016
    NTU professor fired for data falsification (New Press), 2016
    This case involved Western blot image manipulation and three institutions, in Singapore, the USA, and New Zealand. Two PhDs were revoked (and this has to be done by the president of the country, not the university president), nine papers were retracted, and the professor was dismissed for willful negligence. The national research organization is also seeking repayment. As a problematic side effect, students of this professor are now left without a supervisor and are often not accepted into joint programmes elsewhere because of the tarnished reputation of the laboratory. They are innocent, yet they suffer.
  3. Adventures in Copyright Violation: The Curious Case of Utopian Constructions (Blog, Lincoln Cushing), 2012
    This was a case of images being misused. The owner of the copyright on the pictures stumbled upon his pictures and wrote to the full professor at the Arts School who was using them. All he wanted was his name on the images and a link to his site. The professor refused; a lengthy investigation including external people ensued (one that also uncovered additional problems) and ended with the professor being dismissed.
  4. Fake Peer Reviews, the Latest Form of Scientific Fraud, Fool Journals (Chronicle of Higher Education, paywalled), 2012
    A scientist managed to hack into the Elsevier system for referee reports. He added sentences along the lines of "paper X needs to be referenced" in order to increase his own citation index. It turned out that there were 122 instances of such hacking. The scientist resigned, but NTU referred the case to the Singapore Police Force under the "Misuse of Computers" legislation, and the scientist has apparently left the country.
Andersson concluded by discussing the challenges in developing a culture of research integrity at a rapidly developing university in a competitive environment. There are aggravating factors such as hierarchy, heterogeneous faculty, and tolerance of misconduct, and also the fact that investigations of research integrity cases need competencies outside the traditional university framework, such as lawyers. He emphasized that no university or institution can be immune to research fraud, and thus they all need to have clear procedures defined.

The second speaker in the plenary session was Mai Har Sham, the associate vice president of research at The University of Hong Kong. She noted that just recently there was a meeting of the Asia-Pacific Research Integrity Network with 110 participants from 20 countries. She specified three areas in which the institution is called on to take action:
  1. Determination and commitment - provide policies, resources and infrastructure support
  2. RCR education, skills training, setting up a data management system and supporting platforms
  3. Taking the initiative for quality assurance and risk management
The third speaker was Jay Walsh, the vice president for research at Northwestern University, USA. He noted wryly that if you have never been quoted out of context, you haven't been quoted enough. How true. He then embarked on a short description of how we learn: We gather evidence and we form stories. We then develop a hypothesis about how the words in the stories work. We take data, distill it into information, coalesce that into knowledge, from which we develop wisdom.

But things can go wrong that warp the stories, the data, the information, the knowledge and/or the wisdom. Beyond inadequate methods, poor data, and poor practices, he referred to a paper that recently identified 235 forms of bias.

He feels that the path forward involves the training of researchers in the responsible conduct of research (RCR). Since it is hard to change the curriculum, the funders should make RCR training a requirement. Speaking of funders, he desires a single system for handling FFP  (falsification, fabrication & plagiarism) cases, as each funder has a different process. There need to be robust RCR courses, and professors could be given credit for teaching such courses. It is also vital to have a system that allows students and post-docs to come forward with problems without retribution, although this is so easy to say and so hard to do. The root causes of research integrity issues are wrong incentives. These need to be solved, otherwise we are just treating the symptoms.

I then chaired a session on authorship. This seemed to me to be such a trivial topic, as the focus of publication is on communication from a group of authors to a collective of readers, so I find a ranking to be unnecessary. But there are many and various forms of author orderings and inclusions, and perceptions of what they all mean. And where there are differences of opinion, there are fights, sometimes quite intense and protracted. It was interesting to see people investigating this from all sides. I'll just list the talks here, as I was busy dealing with the time and the questions during the session:
  • Authors without borders: Investigating international authorship norms among scientists & engineers - Dana Plemmons (University of California, Riverside, U.S.A.)
  • Experiences of the handling of authorship issues among recent doctors in medicine in Sweden - Gert Helgesson (Karolinska Institutet, Stockholm, Sweden)
  • A philosophical framework for a morally legitimate definition of scientific authorship - Mohammed Hosseini (Utrecht University, Utrecht, The Netherlands)
  • The perceptions of researchers working in multidisciplinary teams on authorship and publication ethics - Zubin Master (Albany Medical College, Albany, NY, U.S.A.)
  • An investigation of researchers' understanding and experience of scientific authorship in South Africa - Lyn Horn (University of Cape Town, Cape Town, Republic of South Africa)
The final plenary session was about interventions that work. The session chair, Lex Bouter, remarked that the only thing that people have widely learned to use is text-matching software; all other interventions investigated have either shown no effect, or an effect in the wrong direction.

The first speaker was Klaas Sijtsma, who was vice-dean at the Tilburg University School of Social and Behavioral Sciences under the deanship of Diederik Stapel when Stapel confessed to the rector that he had, indeed, committed extensive academic misconduct. Sijtsma was first named interim dean, then continued on as dean and will be stepping down in the coming school year. In his talk "Never Waste a Good Crisis: Towards Responsible Data Management," he spoke about this scandal that also touched the University of Amsterdam and the University of Groningen and involved dozens of articles and book chapters, and affected several PhD theses that were granted on the basis of analysis of fraudulent data.

They were lucky to have had a confession, so that Stapel's contract could be terminated, although many committees were still needed to investigate all of the publications. A few factors were identified that permitted a culture to thrive in which the fraud was possible: Stapel was unusual in that he preferred to work alone, he would not allow his PhD students to collect their own data, and he presented unlikely results to journals.

Tilburg University is taking the following steps to foster a climate of integrity:
  • Each PhD student must have at least 2 supervisors.
  • Master theses and PhD theses are scanned for plagiarism (although they did this already).
  • An official formula is read aloud publicly when a doctorate is awarded. It refers to the young doctor's obligation to academia and society to act with integrity.
  • The university has a Code of Conduct.
  • Every staff member must sign an integrity code.
  • There is now an independent Integrity Officer and a Research Committee.
Additionally, the School of Social and Behavioral Sciences took two actions:
  • They intensified classes on research ethics and research integrity.
  • The dean instituted a "Science Committee" in the spring of 2012.

This, it seems, was one of the best ideas they had. This committee is tasked with auditing a small sample (about 20 out of 500) of the articles published by members of the school each year. Their task is to assess the quality of the data storage and to look closely at how well the research methods are described. The committee thus learns where there are problems in preserving data, and advises the school's management team and the researchers about data storage, completeness of data sets, honoring subjects' privacy, access to data, and making the data available to others. They are not out to "witch-hunt" for fraudsters, but just to eyeball the data. That, however, keeps the various research groups on their toes and thinking about these aspects of their data before they publish. In turn, this creates a better research atmosphere.

Sijtsma has often been asked why he didn't design a universal data storage system and data management policy first. Well, it seems he understands that computer systems are often too complex, take a lot of time, are very expensive, and tend to encounter unpleasant technical surprises. It would have taken too long, grass would have grown over the scandal, and the sense of urgency would have disappeared. So he installed the committee first. It set up rules and regulations and announced annual random audits. Now the groups were motivated to come up with a data policy that suited their needs best.

This worked quite well! Some groups are better than others; people tend to arrange their storage only when they are audited, and when they leave the school, they lose commitment. There is no consistent data storage system, but that was a deliberate choice, so there is still much more to do.

Do these interventions work? He reports that they do. They won't prevent new affairs, but they do encourage RCR and reduce QRP (questionable research practices). He also noted in the Q&A that the university decided not to rescind the doctorates of the people who had used Stapel's fraudulent data, as they did not know that the data was false.

For more information on the Stapel scandal, see the report: Flawed science: The fraudulent research practices of social psychologist Diederik Stapel.

Patricia Valdez, the extramural Research Integrity Officer (RIO) of the National Institutes of Health, spoke on the NIH perspective on research integrity.

Her focus was on the reproducibility crisis, as the NIH invests $30 billion of taxpayer money annually. They don't want to waste money trying to reproduce something that is erroneous. They are focusing on evaluating the rigor of the methodology and the transparency of the research in the hope that this will have an effect on reproducibility.

She referred to a 2017 book by Richard Harris, Rigor Mortis: How sloppy science creates worthless cures, crushes hope and wastes billions (Basic Books). The take-home message from the book is: Teach students methods the first year, not facts!

Ian Freckelton closed the session speaking on "Research Misconduct and the Law: Intervening to Name, Shame and Deter." He is a lawyer (Queen's Counsel at the Victorian Bar in Australia) and a professor of Law and Psychiatry at the University of Melbourne. He published a book in 2016 called Scholarly Misconduct and the Law (Oxford University Press).

After reading to us the most important bit from Stapel's book about the fraud (so we don't have to read it), he raced us through criminal law, which is invoked to shame and deter, as it has been applied to research misconduct. Then he spoke of a number of cases (I've put in links to press articles or Wikipedia for more detail).
He also spoke about another book, Tom Nichols' The Death of Expertise (2017), about when experts lie, and noted that there are many other cases of fraud that have not reached the courts. He closed with the observation that the law is a very slow, blunt instrument and that criminal prosecution is not the answer, but noted that research fraud is not victimless. A court decision would, however, vindicate whistleblowers and hopefully provide a strong deterrent.

We then were ferried by boat through the canals of Amsterdam and the Amstel River to our dinner. I'll try to get a short description of day 4 out by tomorrow!

Monday, May 29, 2017

WCRI 2017, Day 2

Day 2 of the WCRI 2017 opened with a plenary session that was entitled "Transparency and Accountability." Boris Barbour (a neuroscientist with the École Normale Supérieure, Paris) introduced the PubPeer community and spoke about how they ensure academic quality. PubPeer has been online since 2012 and provides a sort of online journal club for discussing issues with published papers. Any publication with an identification number, such as a DOI, can be commented on. They have collected over 70 000 user comments about papers in 2 200 journals. Their main rules on comments:
  • Comments must be based on publicly verifiable information (personal communications do not count and will be removed)
  • There is a permanent right of reply for the authors
  • Show the original data
  • Community surveillance enforces following the rules
  • Remember, the publication was the author's choice, stay polite.
Of course, he remarked, if you don't want your research to be discussed, perhaps you shouldn't publish it. He suggested three blog posts for further reading about PubPeer.
Then Stephan Lewandowsky, a psychologist currently at the University of Bristol, spoke on "Being open but not naked: Balancing transparency with resilience in science."  He gave some examples of open data being, as he called it, "weaponized". It is, of course, clear that data can be twisted and misused, but I am not sure if that is a good reason to avoid open data. He ranted a bit about blogs and Twitter, and then noted: "Science should be open and transparent, but there is a distinction between science on the one hand, and noise, commercial interests, or political propaganda on the other. Openness and transparency aid the dissemination of political propaganda." His solution to the perceived problem with open data is establishing symmetry:
  • People who request data must be competent and must operate in an institutional context of accountability.
  • People who request data must preregister their intentions (and conform to them).
  • Participants' consent must be considered.
  • Data availability and limits should be enshrined in the peer-review record at the time of publication.
I personally find this too narrow and open data very important. In particular, there are many good researchers outside of an institutional context, just as there are bad researchers within the institution. It's not just a question of the openness of the data.

Jet Bussemaker, the Dutch Minister of Education, Culture and Science, then spoke on The importance of independent research in today’s society. She gave an example of a publication by a Dutch researcher that turned out to be erroneous and was retracted by the first author. Honesty is so important to academic integrity. She was adamant that government should not be in the business of regulating scientific conduct; that needs to be done by the scientists themselves.

The second plenary session was opened by the South African Minister of Science and Technology, Naledi Pandor. She pointed out a number of issues: African scientists tend to be junior partners in collaborative research, not principal investigators. Researchers from around the world are glad to visit African countries, but not so keen on researching together. Despite many African research departments being underfunded, they do all they can to keep up with the Western world. There is an online review platform for research ethics committees, Rhinno, that is being used by many countries in Africa. She noted that although 10 % of the world's population lives in Africa, only 1 % of the clinical trials are held there, and thus the results may be skewed. She closed by noting that the empowerment of women is critical to development in Africa.

The plenary session was closed by a very brief talk by Robert-Jan Smits, Director-General for Research and Innovation at the European Commission, on research integrity as a responsibility for everyone. He spoke of the platform on academic corruption ETINED (Ethics, Transparency and Integrity in Education), but noted that the EU does not want to become the European science police department. Science must be built on trust.

After lunch there were five sessions in parallel in three blocks. In the first block I really wanted to hear 3 talks in 3 different rooms, but I ended up listening to 2 talks in one room, 2 in another.

Clemens Festen from Rotterdam in the Netherlands spoke about their new regulation, introduced after a severe case of plagiarism, requiring that all PhD theses be scanned with a so-called plagiarism detection system. It turned out to be too difficult, as the PhD theses were so large, even after removing all graphics and tables, which was a lot of work. As part of another investigation they ran 250 known duplicates through the system, and were surprised to find only half of them flagged by the system. So they have moved from focusing on finding plagiarism to letting the PhD students run the system on their own work, for example to see whether the literature list is formatted properly, that is, whether someone else has already formatted one just the same way.

Sven Hendrix from Hasselt in Belgium spoke about how whistleblowers and the scientists they accuse both deserve protection: even if a whistleblower is annoying, they may actually be right in their allegations, and the scientific record needs correcting. He himself was accused (and acquitted) of academic misconduct, so he is interested in writing about what to do when one is falsely accused of academic misconduct. He noted that national and international, trustworthy, independent institutions are needed where both whistleblowers and the accused scientists can get advice and counseling.

Ivan Oransky from RetractionWatch spoke about an investigation they did into finding people who had been charged with a criminal offense for academic misconduct and sentenced to some sanction. They found 39 cases and classified them as directly involving academic misconduct (for example, falsifying drug test results) or indirectly (grant issues, attempting to bribe a government inspector checking the lab for safety violations), plus one perimeter case in which a scientist ordered cyanide in order to kill his wife, obtained it because he was a scientist, and used it. He also noted the case in Italy in which scientists were charged with failing to warn of an earthquake, but this case has been dismissed by the Italian courts.

Anisa Rowhani-Farid (Kelvin Grove, Australia) looked at how open data is provided by authors at the British Medical Journal for her PhD thesis. She screened 160 data-based articles that had been published since the BMJ started its open data policy. She encountered many excuses: she was ignored, the published links no longer worked, or she was told to apply for permission and that it would take 6-8 months to obtain access. In the end she was only able to access 24 % of the data that was supposed to be openly available.

After coffee I joined the seminar on predatory publishing. Ana Marušic (Split, Croatia) was moderating, there were three speakers and a good discussion at the end.
  • David Moher from Ottawa, Canada asked whether there are differences between open access journals and traditional subscription journals. They looked at 100 journals from the former Beall's list and 100 legitimate Open Access journals, comparing 56 data points. They found many differences, and have posted a list of criteria for identifying such journals.
  • Jocelyn Clark, Executive Editor of The Lancet, gave some insight as to why such journals are so popular in developing countries. There is a massively growing research output in these countries, an increasing pressure to disseminate and publish, a feudal publish or perish system, there is easy access to and targeted marketing of predatory journals, and unfortunately rather limited knowledge/training in publishing.
  • Jadranka Stojanovski (University of Zadar, Croatia) spoke of the many shades of journal publishing. Croatia spends fully 20 % of its research budget on subscriptions! She suggested a composite rating for journal quality based on efficiency, focus, impact, scope, and selectivity.
During the lively discussion the point was made that we should perhaps not be talking about subscription and predatory publishers, but big-business-publishers and newcomers. The Leiden Manifesto for research metrics was mentioned, that involves 10 principles to guide research evaluation. It was noted that there are many parallels between contract cheating and publishing in predatory journals.

The final session I attended was "Re-thinking retractions" led by Elizabeth Moylan from BioMedical Central (SpringerNature) with Daniele Fanelli (Stanford University), Richard P. Mann (University of Leeds, UK), Ivan Oransky (RetractionWatch), and Virginia Barbour (past chair, COPE, UK). After each gave a short presentation, Daniele and Virginia on proposed changes and variations of retractions (Daniele's is under review, Virginia's on bioRxiv), Richard about having to retract a paper, and Ivan about their "Doing the right thing award" (DiRT), a good discussion ensued. There was much discussion about how to link articles with retractions and the various versions, whether it is really necessary to name different types of retractions, and a bit of a spat over whether it is usually the junior author who is "at fault" (neither side had evidence to cite). A final discussion on intent was nicely closed by Ivan, who noted that if you require absolute proof of intent in order to speak of a fraudulent publication, then you will never, ever retract a paper unless you have emails stating that someone wants to commit fraud. And if such emails exist, they would love to have them.

Tomorrow is another day packed with talks, I will be chairing a session so will not be able to report in too much detail on those talks. We are also having dinner together, so I may not get to blogging tomorrow. 

WCRI 2017, Day 1

                                                                                                                    Day 2
After the wonderful conference in Brno about plagiarism (days 1 - 2 - 3) I am now attending the 5th World Conference on Research Integrity 2017 at the Free University (VU) in Amsterdam. Today there were 9 pre-conference workshops and the opening session. I attended two half-day workshops, the opening session, and the reception. I will try to blog all of the sessions I attend, although so many interesting talks are in parallel - there are 5 parallel sessions, and they are necessary as there are over 800 people attending!

Workshop 6: How to investigate allegations of research misconduct
Session facilitators: Paul Taylor (RMIT, Melbourne) & Daniel Barr (Deakin University)

Since I am often the person at VroniPlag Wiki who informs institutions of cases of research misconduct, I was very curious to hear from the other side what processes they (should) follow.

The first important point was understanding that because research is done by humans, there will be errors. There are also pressures that can cause some humans to respond in ways that others do not find acceptable. There was some discussion about what exactly is meant by "research misconduct" and if one should perhaps speak of "breach of research integrity" in order to move away from personal accusations towards a focus on the scientific record. If there are errors there, they must be corrected, preferably in a timely manner.

I found the questions asked of the institutions about their environment to be excellent:

    •    Is there a clear and available policy or process?
    •    Are there independent sources of advice?
    •    Are the right people providing this advice?
    •    Is there one place that receives complaints?
    •    Does the process include reporting back or publicly announcing the results?

I have often struggled to find the processes of various institutions, in particular the place to address my concerns. I also find that many institutions do not report back to me what they have decided, and more problematically, don't necessarily do anything to correct the scientific record because of legal issues.

It was clear that it is not easy to come up with a policy and process that can cover every case - they are all so different. But splitting an investigation into two phases seems to be quite common. In the first phase, there is a preliminary assessment made: Does the complaint appear to have merit? Is it in our jurisdiction? If so, then there is sometimes a determination made whether this complaint is made in good faith, or if it appears to be vexatious (a new adjective I learned today that totally fits the situation of A trying to point out errors in the work of B, his bitter rival, or C raising a complaint for the 10th time with no new evidence). If an investigation is warranted, a report that includes all the evidence gathered so far should be prepared. Hearings are not necessarily held at this point.

Susan Zimmermann and Karen Wallace, from the Canadian Secretariat on Responsible Conduct of Research (representing the three major funding agencies in Canada), gave a presentation and led the discussion on conducting the investigation. In a nutshell, the process is as follows:

    1.    Choose the right people to conduct the investigation
    2.    Gather relevant information
    3.    Make a finding
    4.    Prepare a report

One interesting point was that in Canada, in order to apply for funding from any of these three organizations, a researcher must agree that, if found in such an investigation to have committed serious misconduct, his or her personal information (name, type of misconduct, etc.) may be made public. After all, the public pays for the research with its taxes. This makes it legal to publish names and findings.

Jillian Barr and Belinda Westmann from the NHMRC (the Australian National Health and Medical Research Council) spoke about implementing the outcomes (a much better word than sanctions or punishments). In Australia there are funding agreements between the NHMRC and the institutions receiving the funding as to how they must conduct investigations, implement outcomes, and report back to the funding agency. In particular, if it has been determined that a publication is to be retracted, they want to see the retraction. If institutions do not cooperate, they lose the right to apply for funds.

There were many interesting topics touched on, and many interesting cases briefly mentioned.

After lunch I attended
Workshop 7: Teaching and training in RE/RI: The relevance of Moral Case Deliberation

Since I often write ethical case studies in computer science for a German-language computer science journal (the case studies are also published online at Gewissensbits), I wanted to hear more about this method of dealing with case studies.

The workshop was led by Guy Widdershoven, Fenneke Blom, & Giulia Inguaggiato from the Department of Medical Humanities at the VU Amsterdam. Guy and colleagues have developed a structured method of deliberating cases that involve dilemmata, in particular those encountered in clinical practice, especially in neonatology. There are a number of publications about this, for example Suzanne Metselaar, Bert Molewijk & Guy Widdershoven, Beyond Recommendation and Mediation: Moral Case Deliberation as Moral Learning in Dialogue in The American Journal of Bioethics.

This structured method of discussing a case with a group of people helps find a solution, as people tend to branch off onto other topics, or assume a know-it-all stance in suggesting solutions right away. The steps keep one focused on the dilemma at hand with its possible resolutions. It involves 7 steps:
  1. Case presentation
  2. Formulating the dilemma, the potential actions, and the harms that each action would incur
  3. Asking questions for elucidation
  4. Analysing from various perspectives the values and norms involved (for example, for the value "respecting older people" there is the norm "I give my seat in a crowded bus to an older person who enters the bus")
  5. Individual judgements by each of the participants
  6. Dialogue about the judgements and potential repair mechanisms for the harms
  7. Evaluation of discussion
First, Guy presented such a case to the group of 20 persons at the workshop. Then we were split into two groups, and each group worked on one real dilemma. We promised to keep the dilemmata confidential, but there were quite lively discussions in both of the groups - it was hard to quit and gather back for some time of reflection!

Opening session

Lex Bouter from the VU (with his co-chairs Tony Mayer and Nick Steneck) opened the conference, welcoming over 800 participants from 52 countries.

The rector of the VU, Vinod Subramaniam, welcomed us and touched on many issues a university has to deal with today. It was good to see someone from the leadership of a university with so much understanding of the issues and that there are no easy answers to the problems. He noted that the Netherlands Code of Conduct for researchers is currently undergoing revision and should be published by 2018. The version from 2004 was last updated in 2012.

José van Dijck,  the president of the Royal Netherlands Academy of Arts and Sciences, gave a short talk about monitoring the research process. She formulated the motto "In Researchers we Trust (that is why we welcome everyone to monitor)".

The session closed with a play by Het Acteurgenootschap/Pandemonia: The ConScience App, a play about scientific integrity. It was long, but it sure packed a punch. There were so many issues about scientific integrity compressed into these few scenes. I spoke with the actors afterwards, they have spent over 2 years touring with this piece, in Dutch and in English, and speaking to audiences about it afterwards. A great way to get a discussion on this subject going, I think!

Then we had earned our Dutch specialties, cheese and bitterballen and herring and Jenever. I didn't manage to find the stroopwafels, more's the pity. It was wonderful stumbling onto people I've corresponded with over the years, and seeing some people again I haven't seen for a while. I had a nice chat walking back to the hotel, and since it was such a warm evening, many of us stood outside the hotel talking some more. I'm really looking forward to days 2-4, I hope I can keep up blogging!

Friday, May 26, 2017

Brno, Day 3

Day 2
The last day of the plagiarism conference in Brno - time has just flown by! It's been so wonderful to talk (and share some wine) with colleagues from around the world who are concerned with academic integrity. Here's a short overview of the talks I heard today:
  • Thomas Lancaster from Staffordshire University opened up the third day with his talk on "Rethinking Assessment by Examination in the Age of Contract Cheating." He first showed us some current newspaper articles about contract cheating, then ads from sites offering exam sit-ins, and all sorts of technology that can be used for cheating: special cheating watches, mini-earpieces, a pen with a camera, boxer shorts (!!) with communication devices built in, and a mobile phone cover that looks like an ancient calculator and actually works, so that it passes a quick check by a proctor. There is quite a market for such tools, apparently. He also showed ads for people wanting others to take the exams that contract cheating sites insist potential "authors" pass. So we have cheaters cheating to be employed as cheating enablers.... He brought up an important question involving so-called "smart drugs" (nootropics): Should the use of such drugs to enhance performance be considered cheating, as they are not available to everyone? It was noted that coffee and cigarettes can be considered nootropics as well.
  • Trudy Somers, from the online Northcentral University, suggested taking lessons from how businesses attempt to fight corruption and embezzlement. She noted that the Fraud Triangle (or Diamond) is used to explain situations in which fraud can occur: when there is pressure or an incentive to do so, the person has the opportunity, a ready rationalization is at hand, and they have the capability to act.
  • Wendy Sutherland-Smith, from Deakin University in Australia, spoke about a system that is in place in 5 out of 6 Australian states: There are student advocates who are there to ensure that students do not face academic integrity hearings alone and know their rights as well as the formal procedures and range of potential outcomes. She notes that the largest problem students with integrity issues face is pressure and a fear of failure. Many in such a situation think that everyone else is cheating, and when they see others getting away with contract cheating, they rationalize (see above) that they can do it, too. She suggests introducing academic integrity modules in core units, increasing legitimate support (also for online students!), pressuring governments for national legislation on contract cheating, and increasing contract cheating awareness campaigns (there will be one in October; I didn't note the date, but will add it here when I find it). She also suggested using technology for identification of students; I am personally quite opposed to such surveillance technologies. She closed encouraging us to focus on EDD: Education, Deterrence and Detection, and to involve students in the issue, as they are our allies in the fight against contract cheating.
  • Veronika Králíová, a master's student of Tomáš Foltýnek, conducted an analysis of the ghostwriter market in Czechia. She was able to identify more than 100 sites, although it was not possible to determine whether the same person or company was behind multiple sites. She then looked at the log files for her university for three months and found tens of thousands of accesses to these sites. She also commissioned two papers (and the ethics of this was questioned during the discussion) and then surveyed people online to ask if they had ever used such a service. 8 % stated that they had; 60 % of them had asked a friend or classmate, the rest used the services of a company. She suggests that, among other things, her university re-direct student attempts to access cheating sites to a page that informs the student about the legitimate help they can get at the university.
  • Patrick Juola, from Duquesne University in Pittsburgh, USA, spoke about using stylometrics to detect whether the authors of two papers are probably the same person or not. He introduced an interesting case he was involved with, determining that the author "Robert Galbraith" was most probably JK Rowling, the author of the Harry Potter books. After a newspaper picked this up, Rowling admitted that she was, indeed, the author. He emphasized that any seven-word string that you write would most probably be unique unless you are quoting someone or using a set term or saying. I've been saying this for years, but no one believes me, so now I can quote Juola on it 😀. He has a company that offers authorship comparison services, and notes that determining multiple authorship is still a research question.
  • There was then a discussion panel on "Strategies for Addressing Contract Cheating" with Thomas Lancaster, Phil Newton, Shiva Sivasubramaniam, and Chloe Walker. I think we could have discussed with these four until sundown, at least, but there was only an hour available. An interesting discussion flared up over whether the outsourcing of writing work to services in disadvantaged countries is colonial exploitation. It was also noted that some students are getting overassessed, and the burden of grading all of these assessments increases the workload for the teachers. The topic of gift authorship was also briefly touched on. I think Chloe summed it up nicely when she said: Ethics gets subsumed to the practicalities of Real Life.
  • Teddi Fishman, the former director of the International Center for Academic Integrity, had the job of summing up the conference. One of her important points is that we get bogged down in dealing with what we don't want: plagiarism, grade inflation, data manipulation, contract cheating. She suggested that we refocus our efforts on what we want: skill acquisition, verifiable & trustworthy data, and learning. We have to require that the students participate actively in the learning, and we need to introduce more interactivity into the process, getting away from boring lectures. She strongly encouraged us to be brave and try out new formats of assessment, for example, students submitting videos of themselves doing what they need to be learning, or some such. And then, to practice what she preached, she had someone prepare some slides she had not seen before, and she used them to sum up the conference, a sort of PowerPoint Karaoke. There were some really difficult pictures presented, but she always came up with something good!
I read some of the papers for talks that I was not able to hear because they were in parallel sessions. I'd like to comment briefly on two of these here.
  • Marco Cosentino, Franca Marino, Chandana Haldar and Georges J. M. Maestroni gave an account of their experience of being added as honorary authors to a (rather flawed) paper and having to expend much effort (and wait a long time) for a retraction to be published.
  • Julius Kravjar is looking to extend the thesis repository that he and colleagues run with their plagiarism detection system SVOP in Slovakia to a pan-European repository of theses and dissertations. He examines various issues that would have to be dealt with if there was to be such a repository. 
There were so many good discussions over the last three days, during sessions and during outings. For example, on the bus I discovered that the person sitting next to me, Erik Borg, is one of the chapter authors for a book that is in preparation! We've exchanged many emails but never met in person. There were also many representatives from various countries that are members of the Council of Europe who were there to learn. I find it quite heartening that they are planning on getting active about academic integrity! I didn't see any German officials, although there were participants from Germany, with talks and posters. I will try to spread the word about the European Network for Academic Integrity!

As a Swiss Army Knife-carrying person I was quite enchanted with these knives in chocolate:

Bizarrely, I had the following tweet in my timeline after tweeting up a storm the past three days on contract cheating:

I guess they didn't understand what I was tweeting about....