China sees dip in research-grant misconduct
By Jane Qiu | December 31, 2014

The country’s premier science-funding agency reports fewer offences than in previous years, but some say the ethics problem is still 'rampant'.

China's zero-tolerance policy on scientific misconduct is starting to pay off, according to the country's premier basic-research funding agency.

In 2014, the National Natural Science Foundation of China (NSFC) saw fewer cases of scientific misconduct — especially plagiarism and duplicate submissions — in grant applications than in previous years, its president, Yang Wei, said at a 30 December press conference in Beijing.

The NSFC received 206 misconduct allegations this year, including 66 suspicious cases flagged by plagiarism-detection software, which the agency began using in 2012. Investigation by its oversight committee confirmed misconduct in 33 of those cases, including falsification, fabrication, plagiarism, the purchase of grant proposals and the use of false personal information.

Of these, applications that were still pending were invalidated, and research funding was revoked in the four cases in which it had already been granted, says Chen Yiyu, former NSFC director and chair of the committee. The names and offences of seven researchers were released at the 30 December press conference; five more cases will be made public on the agency’s website in the coming months.

The tallies for 2014 are not yet complete, so more cases may yet surface, says Chen Yue, vice-director of the agency’s oversight and audit bureau. But the current numbers suggest a reduction in offences in 2014 compared with 2010–2013, during which, Chen Yiyu says, there was an average of 49 cases per year.

Yang attributes this decline to the agency’s zero-tolerance attitude towards misconduct and the efforts in the past few years to root out offences. “Scientific integrity is the bottom line,” he says. The message the agency wants to convey, he says, is that “you are out once you cross the line”.

But others say that the data are not necessarily indicative of a broader positive trend. “The cases uncovered by NSFC are just the tip of the iceberg, given how rampant the problem is in China,” says Mu-ming Poo, director of the Institute of Neuroscience at the Chinese Academy of Sciences in Shanghai.

Progress report

In a bid for greater transparency and better education on scientific integrity, the NSFC began in 2013 to hold regular press conferences to release investigation results and to publicize serious offences.

Among the seven cases made public in the 30 December press conference, four involved plagiarism of previously funded applications, including purchasing grant proposals online. In the other cases, researchers falsified their ages, job titles, CVs or even identities to qualify for grants.

The NSFC, which has a budget of US$3.1 billion, is the largest basic-research funding agency in China. It is also the only one so far that publishes data about its misconduct investigations. “It’s certainly a step in the right direction,” says Poo. “Other government agencies, universities and research institutes should follow suit,” he says.

Still, Poo adds, “the agency hasn’t gone far enough in its transparency and zero-tolerance policies”. For instance, only one-third of the misconduct incidents this year will be posted on the agency’s website, and no institutions were mentioned in the cases made public at the press conference. “Naming and shaming have a powerful deterrent effect, especially given the strong ‘face-saving’ culture in the country,” says Poo. “This will also put pressure on institutions to clean up the act of their researchers.”

In a 2013 survey by the Chinese Association of Science and Technology (CAST) in Beijing, 56% of 33,000 respondents reported that they had witnessed misconduct. The data suggested that nearly half of researchers — and more than two-thirds of graduate students, postdoctoral researchers and junior academics — have a tolerant attitude towards research misconduct, says Wang Chunfa, CAST’s vice-secretary in charge of the survey, which takes place every five years.

“In many cases, principal investigators are not considered to be responsible for fraud committed by people in their research group and often go unpunished,” says Poo. “Graduate students and postdocs often become the scapegoat of senior faculty.”

The NSFC would not disclose the proportion of junior and senior scientists in its confirmed misconduct cases. Chen Yue says that the limited cases uncovered by NSFC are not enough to pinpoint a possible correlation between misconduct and career stage.

According to regulations enacted in 2007 by China’s State Council, researchers found guilty of misconduct can be banned from receiving NSFC funding for up to seven years — which Yang sees as possibly too lenient for some of the severe cases he has come across.

Photo: Yang Wei, president of the National Natural Science Foundation of China, says that his agency's efforts to crack down on academic fraud are beginning to pay off. - By Qilai Shen/In Pictures/Corbis

From: Nature

What defines an author
December 2014 | in Revista Pesquisa FAPESP

A document aimed at the editors of scientific journals offers a series of guidelines for avoiding disputes and ethical dilemmas involving the attribution of authorship of papers. Released in September by a working group of the London-based Committee on Publication Ethics (COPE), the text suggests that each journal clearly define the criteria it considers necessary for a researcher to sign an article, and post them on its website. If the rules are inspired by those of an institution or scientific society, this should also be stated.

Another important precaution is to require all authors to sign a statement of responsibility. Most journals already do so, but COPE sets out four requirements that should not be forgotten: 1) that all authors meet the journal's criteria for authorship; 2) that all take responsibility for the integrity of the research; 3) that the names of other individuals qualified to be authors of the article are not omitted; 4) that each author's contribution to the conception and preparation of the article is declared. It is also advisable for journals to write to all listed authors, to ensure that every one of them consented to signing the paper.

In general, there is consensus that an author is someone who makes a genuine intellectual contribution to the scientific work, taking part in its conception, execution, analysis and the writing-up of results, and approving its final content. In the document, COPE recommends that individuals whose contributions meet some, but not all, of the authorship criteria be named in the acknowledgements, along with those who helped obtain funding and infrastructure but did not take part in the research.

The working group admits that it does not have answers to every controversy over the attribution of authorship. It cites, for example, the guidelines of the International Committee of Medical Journal Editors (ICMJE), under which an author must be accountable for all aspects of the paper, so as to ensure that questions about the accuracy and integrity of any part of the work have been resolved. This, the working group says, can be problematic in multidisciplinary studies, in which researchers understand in depth only their own partial contributions. Another gap in the ICMJE guidelines concerns the requirement that all authors approve the final version to be published: one fear is that, in articles with many signatories, an author may make exaggerated or unreasonable demands that become an obstacle to the article's publication. The COPE document is available here.

Image: By Daniel Bueno

Source: Revista Pesquisa FAPESP

Journalists to Catalog Retractions
By Kerry Grens | December 16, 2014

Staff of the blog Retraction Watch will create a database of papers retracted from the scientific literature.

 

Thanks to a grant from the MacArthur Foundation, Retraction Watch co-founders Adam Marcus and Ivan Oransky, along with their colleagues, are creating a database of journal retractions. “The goal of the grant—$200,000 per year for two years—is to create a comprehensive and freely available database of retractions, something that doesn’t now exist,” Oransky announced on the blog.

The Retraction Watch team says it already catalogs two-thirds of retractions as they occur, but “we’ll need more resources to be comprehensive.”

According to Bio-IT World, Retraction Watch “has been a beloved resource in the scientific community, focused on retractions in the literature. From the beginning, Retraction Watch has tried to dig past the brief and sometimes opaque notices of retraction published by journals, contacting authors and editors to get the real scoop on why retractions have occurred.”

Having a searchable database of retractions could help scientists in preparing manuscripts or even experiments, but as the Covering Health blog noted, “it certainly sounds like it will be a useful tool for reporters to check the credibility of studies or sources they are using.”

Oransky was an editor at The Scientist from 2002 to 2008.

 

Photo: Wikimedia, Niklas Bildhauer

From: The Scientist

For Sale: “Your Name Here” in a Prestigious Science Journal
By Charles Seife | December 17, 2014

An investigation into some scientific papers finds worrying irregularities


Klaus Kayser has been publishing electronic journals for so long he can remember mailing them to subscribers on floppy disks. His 19 years of experience have made him keenly aware of the problem of scientific fraud. In his view, he takes extraordinary measures to protect the journal he currently edits, Diagnostic Pathology. For instance, to prevent authors from trying to pass off microscope images from the Internet as their own, he requires them to send along the original glass slides.

Despite his vigilance, however, signs of possible research misconduct have crept into some articles published in Diagnostic Pathology. Six of the 14 articles in the May 2014 issue, for instance, contain suspicious repetitions of phrases and other irregularities. When Scientific American informed Kayser, he was apparently unaware of the problem. "Nobody told this to me," he says. "I'm very grateful to you."

Diagnostic Pathology, which is owned by Springer, is considered to be a reputable journal. Under Kayser’s stewardship, its “impact factor”—a crude measure of a journal's reputation, generated by the number of times its articles are cited in the published scientific literature—is 2.411, which puts it solidly in the top quarter of all scientific journals tracked by Thomson Reuters in its Journal Citation Reports, and 27th out of the 76 ranked pathology journals.
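For orientation, the standard two-year impact factor is just a ratio: citations received in one year to a journal's papers from the previous two years, divided by the number of "citable" items it published in those two years. A minimal sketch, with invented counts rather than Diagnostic Pathology's actual figures:

```python
# Two-year impact factor: citations in year Y to items published in years
# Y-1 and Y-2, divided by the citable items from those two years.
# All numbers below are invented for illustration.
citations_in_2013 = 482  # citations in 2013 to the journal's 2011-2012 papers
citable_items = 200      # articles and reviews published in 2011-2012

impact_factor = citations_in_2013 / citable_items
print(f"impact factor: {impact_factor:.3f}")  # -> 2.410
```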

Kayser’s journal is not alone. In the past few years similar signs of foul play have cropped up across the peer-reviewed literature—including in journals owned by publishing powerhouses Wiley, Public Library of Science, Taylor & Francis and Nature Publishing Group (which publishes Scientific American).

The apparent fraud is taking place as the world of scientific publishing—and research—is undergoing rapid change. Scientists, for whom published articles are the route to promotion or tenure or support via grants, are competing harder than ever before to get their articles into peer-reviewed journals. Scientific journals are proliferating on the Web but, even so, supply is still unable to keep up with the ever-increasing demand for respectable scientific outlets. The worry is that this pressure can lead to cheating.

The dubious papers aren't easy to spot. Taken individually, each research article seems legitimate. But an investigation by Scientific American that analyzed the language used in more than 100 scientific articles found evidence of some worrisome patterns—signs of what appears to be an attempt to game the peer-review system on an industrial scale.

For example, one of the articles published in the May 2014 issue of Diagnostic Pathology looks on the surface like a typical meta-analysis of the peer-reviewed literature. Its authors—eight scientists from Guangxi Medical University in China—assess whether different variations in a gene known as XPC can be linked to gastric cancer. They find no such link, and concede that their paper isn't the final word on the matter:

“However, it is necessary to conduct large sample studies using standardized unbiased genotyping methods, homogeneous gastric cancer patients and well-matched controls. Moreover, gene–gene and gene–environment interactions should also be considered in the analysis. Such studies taking these factors into account may eventually lead to our better, comprehensive understanding of the association between the XPC polymorphisms and gastric cancer risk.”

A perfectly normal conclusion for a perfectly ordinary paper. It is nothing that should set off any alarm bells. Yet, compare it with a paper published several years earlier in the European Journal of Human Genetics (which is owned by Nature Publishing Group), a meta-analysis of whether variations in a gene known as CDH1 could be linked to prostate cancer (PCA):

“However, it is necessary to conduct large trials using standardized unbiased methods, homogeneous PCA patients and well-matched controls, with the assessors blinded to the data. Moreover, gene–gene and gene–environment interactions should also be considered in the analysis. Such studies taking these factors into account may eventually lead to our better, comprehensive understanding of the association between the CDH1−160 C/A polymorphism and PCA risk.”

The wording is almost identical, down to the awkward phrase, "lead to our better, comprehensive understanding." The only substantial differences are the specific gene (CDH1 rather than XPC) and the disease (gastric cancer rather than PCA).

This is not a simple case of plagiarism. Many seemingly independent research teams have been plagiarizing the same passage. An article in PLoS ONE may eventually lead to "our better, comprehensive understanding" of the association between mutations in the XRCC1 gene and thyroid cancer risk. Another in the International Journal of Cancer (published by Wiley) might eventually lead to "our better, comprehensive understanding" of the association between mutations in the XPA gene and cancer risk—and so on. Sometimes there are minor variations in the wording but in more than a dozen articles we found almost identical language with different genes and diseases seemingly plunked into the paragraph, like an esoteric version of Mad Libs, the parlor game in which participants fill in missing words in a passage.

We have found other examples of fill-in-the-blanks research. A search for the phrase "excluded because of obvious irrelevance" retrieved more than a dozen research articles of various types—all but one written by scientists from China. "Using a standardized form, data from published studies" also yields more than a dozen research articles, all from China. "Begger's funnel plot" gets dozens of hits, all from China.
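Once a corpus of suspect texts is in hand, hunting for such fill-in-the-blanks passages is mechanically straightforward. A minimal sketch, assuming a small in-memory corpus; the sample texts and the regular expression are illustrative, not the actual method used in this investigation:

```python
import re

# Minimal "Mad Libs" detector: find near-identical boilerplate sentences
# that differ only in the gene/disease slots. The two-paper corpus below
# is hypothetical; a real check would scan thousands of downloaded texts.
corpus = {
    "paper_A": "such studies may eventually lead to our better, comprehensive "
               "understanding of the association between the XPC polymorphisms "
               "and gastric cancer risk",
    "paper_B": "such studies may eventually lead to our better, comprehensive "
               "understanding of the association between the CDH1-160 C/A "
               "polymorphism and PCA risk",
}

# The fixed template, with capture groups where the blanks get filled in.
template = re.compile(
    r"lead to our better, comprehensive understanding of the association "
    r"between (?:the )?(?P<gene>[\w\-/ ]+?) polymorphisms? and "
    r"(?P<disease>[\w ]+?) risk"
)

for paper_id, text in corpus.items():
    match = template.search(text)
    if match:
        print(paper_id, "->", match.group("gene"), "/", match.group("disease"))
# paper_A -> XPC / gastric cancer
# paper_B -> CDH1-160 C/A / PCA
```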

“Begger's funnel plot” is particularly revealing. There is no such thing as a Begger's funnel plot. "It doesn't exist. That's the point," says Guillaume Filion, a biologist at the Center for Genomic Regulation in Barcelona, Spain (pdf). A statistician named Colin Begg and another statistician named Matthias Egger each invented tests and tools to look for biases that creep into meta-analyses. "Begger's funnel plot" appears to be an accidental hybrid of the two names.
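For reference, Egger's real test is a simple regression diagnostic for funnel-plot asymmetry: regress each study's standardized effect against its precision and check whether the intercept is far from zero. A minimal sketch with invented effect sizes, not data from any of the papers discussed here:

```python
import numpy as np
from scipy import stats

# Egger's regression test for funnel-plot asymmetry (one of the two real
# methods behind the garbled "Begger's" name). All data are invented.
effects = np.array([0.10, 0.25, 0.40, 0.15, 0.55, 0.30])  # e.g. log odds ratios
ses = np.array([0.05, 0.10, 0.20, 0.08, 0.30, 0.12])      # their standard errors

z = effects / ses      # standardized effects
precision = 1.0 / ses  # inverse standard errors

# An intercept far from zero suggests asymmetry, i.e. possible
# small-study or publication bias.
fit = stats.linregress(precision, z)
t = fit.intercept / fit.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(z) - 2)
print(f"Egger intercept = {fit.intercept:.2f}, p = {p:.3f}")
```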

Filion spotted the proliferation of "Begger's" tests by accident. While looking for trends in medical journal articles, he found papers that had almost identical titles, similar choices in graphics and the same quirky errors, such as "Begger's funnel plot." He reckons that the papers came from the same source, even though they are ostensibly written by different groups of authors. "It's difficult to imagine that 28 people independently would invent the name of a statistical test," Filion says. "So that's why we were very shocked."

A quick Internet search uncovers outfits that offer to arrange, for a fee, authorship of papers to be published in peer-reviewed outlets. They seem to cater to researchers looking for a quick and dirty way of getting a publication in a prestigious international scientific journal.

In November Scientific American asked a Chinese-speaking reporter to contact MedChina, which offers dozens of scientific "topics for sale" and scientific journal "article transfer" agreements. Posing as a person shopping for a scientific authorship, the reporter spoke with a MedChina representative who explained that the papers were already more or less accepted to peer-reviewed journals; apparently, all that was needed was a little editing and revising. The price depends, in part, on the impact factor of the target journal and whether the paper is experimental or meta-analytic. In this case, the MedChina rep offered authorship of a meta-analysis linking a protein to papillary thyroid cancer slated to be published in a journal with an impact factor of 3.353. The cost: 93,000 RMB—about $15,000.

The most likely intended outlet for the MedChina-brokered paper is Clinical Endocrinology. It is one of five journals with an impact factor of 3.353 and the closest in subject matter. "Obviously, it's a matter of great concern," says John Bevan, a senior editor at the journal. "I'm distraught to think of this going on and flooding the market." Approximately two weeks after being contacted by Scientific American, Bevan confirmed that a suspicious-looking article about biomarkers for papillary thyroid cancer—one that had an author added during revisions—was identified and rejected.

Much of the funding for these suspect papers comes from the Chinese government. Of the first 100 papers identified by Scientific American, 24 had received funding from the National Natural Science Foundation of China (NSFC), a governmental funding agency roughly equivalent to the U.S.'s National Science Foundation. Another 17 acknowledged grants from other government sources. Yang Wei, president of NSFC, confirmed that the 24 suspicious papers identified by Scientific American were subsequently referred to the Foundation's Bureau of Discipline, Inspection, Supervision and Auditing (pdf), which investigates several hundred allegations of misconduct each year. "Tens of disciplinary actions have been taken by NSFC annually for research misconduct, though cases of ghostwriting are less common," Yang e-mailed. Last year one of the agency's disciplinary actions involved a scientist who purchased a grant proposal from an Internet site. Yang stresses that the agency takes steps to combat misconduct, including the recent installation of a "similarity check" for possible plagiarism in grant proposals. (In the year since the system went online the check found several hundred cases of "considerable similarities" out of some 150,000 grant applications, Yang claims.) But when it comes to paper mills, Yang says, "we do not have much experience about this issue and are certainly glad to listen to your suggestions."

Some publishers are only now catching up to the problem of Chinese paper mills. "I wasn't aware there was a market out there for authorship," says Jigisha Patel, BioMed Central's associate editorial director for research integrity. Now that BioMed Central (which is owned by Springer and publishes Diagnostic Pathology) has been alerted to the issue, Patel says, "we now can look into it and address it." Within two weeks of being contacted by Scientific American, BioMed Central announced that it had identified roughly 50 manuscripts that had been assessed by phony peer reviewers. The publisher told the Retraction Watch blog that "a third party may be involved, and influencing the peer review process." It is possible that these manuscripts came from paper mills. We were able to look at the titles and authors of about half a dozen of those papers. All appear very similar in style and subject matter to other paper mill-written meta-analyses, and all were from groups of Chinese authors.

Other publishers have begun to combat the flood of dubious papers. Damian Pattinson, editorial director of PLoS ONE, says the journal instituted safeguards last April. "[E]very meta-analysis we get has to go through a specific editorial check..." that forces authors to provide additional information, including a justification for why they performed the study in the first place, he says. "As a result of this, the rate of papers that are actually getting to reviewers has dropped by about 90 percent. So we are very aware of this issue." Even so, the list compiled by Scientific American contains four suspect papers that were published in PLoS ONE after the safeguards were instituted, and authorship on an upcoming PLoS ONE article was put up for sale by MedChina as this article was being written. When we asked Pattinson about these, he replied: “We will correct and retract papers if there is any indication of misconduct. It’s a problem issue and one that we’re very aware of.”

BMC, Public Library of Science and other publishers use plagiarism-checking software to try to cut down on fraud. Software, however, doesn't always solve the problem, Patel warns; paper mills “add another layer of complexity to the problem. It's very worrying."

Publishers at the moment are fighting an uphill battle. "Without insider information it's very difficult to police this," Clinical Endocrinology's Bevan says. CE and its publisher, Wiley, are trying to close loopholes in the editorial process to flag suspicious late changes in authorship and other irregularities. "You have to accept that people are submitting things in good faith and honesty," Bevan says.

That is the essential threat. Now that a number of companies have figured out how to make money off of scientific misconduct, that presumption of honesty is in danger of becoming an anachronism. "The whole system of peer review works on the basis of trust," Pattinson says. "Once that is damaged, it is very difficult for the peer review system to deal with."

"We've got a problem here," Filion says. He believes that the deluge is just beginning. "There is so much pressure and so much money at stake that were going to see all sorts of excesses in the future."

Additional reporting by Paris Liu.

The list below is 100 published articles that would seem to have the hallmarks of fill-in-the-blanks science. Inclusion in this list does not imply that any given article was written by a paper mill, nor does it imply that the article is definitely plagiaristic. Given the pattern of writing and these articles' similarities to previously published work, however, we believe they are worthy of scrutiny by their publishers. View the list.

There are many more suspicious articles out there; and more are being published every day. These are simply the first 100 we found.

Further Reading:
Filion, Guillaume. "A flurry of copycats on PubMed."

Oransky, Ivan. "Publisher discovers 50 manuscripts involving fake peer reviewers."

Ioannidis J.P.A., Chang C. Q., Lam T. K., Schully S. D., Khoury M. J. "The Geometric Increase in Meta-Analyses from China in the Genomic Era." PLoS ONE 8(6): e65602. doi:10.1371/journal.pone.0065602

Hvistendahl, Mara. "China's Publication Bazaar." Science, 29 November 2013, pp. 1035–1039. DOI: 10.1126/science.342.6162.1035

 

Photo: In the past few years signs of foul play in the peer-reviewed literature have cropped up across the scientific publishing world - Credit: Mike Watson Images/Thinkstock

From: Scientific American

 

Study of massive preprint archive hints at the geography of plagiarism
By John Bohannon | December 11, 2014

New analyses of the hundreds of thousands of technical manuscripts submitted to arXiv, the repository of digital preprint articles, are offering some intriguing insights into the consequences—and geography—of scientific plagiarism. It appears that copying text from other papers is more common in some nations than others, but the outcome is generally the same for authors who copy extensively: Their papers don’t get cited much.

Since its founding in 1991, arXiv has become the world's largest venue for sharing findings in physics, math, and other mathematical fields. It publishes hundreds of papers daily and is fast approaching its millionth submission. Anyone can send in a paper, and submissions don’t get full peer review. However, the papers do go through a quality-control process. The final check is a computer program that compares the paper's text with the text of every other paper already published on arXiv. The goal is to flag papers that have a high likelihood of having plagiarized published work.

"Text overlap" is the technical term, and sometimes it turns out to be innocent. For example, a review article might quote generously from a paper the author cites, or the author might recycle and slightly update sentences from their own previous work. The arXiv plagiarism detector gives such papers a pass. "It's a fairly sophisticated machine learning logistic classifier," says arXiv founder Paul Ginsparg, a physicist at Cornell University. "It has special ways of detecting block quotes, italicized text, text in quotation marks, as well statements of mathematical theorems, to avoid false positives."

Only when there is no obvious reason for an author to have copied significant chunks of text from already published work—particularly if that previous work is not cited and has no overlap in authorship—does the software affix a “flag” to the article, including links to the papers from which it has text overlap. That standard “is much more lenient" than those used by most scientific journals, Ginsparg says.
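arXiv has not published the classifier itself, but the core idea of text-overlap screening is easy to illustrate. A toy sketch based on shared word n-grams ("shingles"); the seven-word window, the 0.2 flagging threshold and the sample sentences are all invented for this example:

```python
# Toy text-overlap detector using shared word n-grams ("shingles").
# arXiv's real system is a trained logistic classifier with special handling
# for quotes, theorems and self-overlap; this shows only the core idea.

def shingles(text, n=7):
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(new_text, old_text, n=7):
    """Fraction of the new text's n-grams that already appear in the old one."""
    new_sh = shingles(new_text, n)
    return len(new_sh & shingles(old_text, n)) / max(len(new_sh), 1)

prior = ("we conducted a meta-analysis of published case-control studies "
         "using standardized unbiased genotyping methods and well-matched controls")
submission = ("we conducted a meta-analysis of published case-control studies "
              "using standardized unbiased genotyping methods in a large cohort")

score = overlap(submission, prior)
print(f"overlap score: {score:.2f}")
if score > 0.2:  # invented threshold for this sketch
    print("flag: significant text overlap with prior work")
```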

To explore some of the consequences of "text reuse," Ginsparg and Cornell physics Ph.D. student Daniel Citron compared the text from each of the 757,000 articles submitted to arXiv between 1991 and 2012. The headline from that study, published Monday in the Proceedings of the National Academy of Sciences (PNAS), is that the more text a paper poaches from already published work, the less frequently that paper tends to be cited. (The full paper is also available for free on arXiv.) It also found that text reuse is surprisingly common. After filtering out review articles and legitimate quoting, about one in 16 arXiv authors were found to have copied long phrases and sentences from their own previously published work that add up to about the same amount of text as this entire article. More worryingly, about one out of every 1000 of the submitting authors copied the equivalent of a paragraph's worth of text from other people's papers without citing them.

So where in the world is all this text reuse happening? Conspicuously missing from the PNAS paper is a global map of potential plagiarism. Whenever an author submits a paper to arXiv, the author declares his or her country of residence. So it should be possible to reveal which countries have the highest proportion of plagiarists. The reason no map was included, Ginsparg told ScienceInsider, is that the text overlap detected in their study is not necessarily plagiarism.

Ginsparg did agree, however, to share arXiv’s flagging data with ScienceInsider. Since 1 August 2011, when arXiv began systematically flagging for text overlap, 106,262 authors from 151 nations have submitted a total of 301,759 articles. (Each paper can have many more co-authors.) Overall, 3.2% (9591) of the papers were flagged. It's not just papers submitted en masse by a few bad apples, either. Those flagged papers came from 6% (6737) of the submitting authors. Put another way, one out of every 16 researchers who have submitted a paper to arXiv since August 2011 has been flagged by the plagiarism detector at least once.

The map above, prepared by ScienceInsider, takes a conservative approach. It shows only the incidence of flagged authors for the 57 nations with at least 100 submitted papers, to minimize distortion from small sample sizes. (In Ethiopia, for example, there are only three submitting authors and two of them have been flagged.)

Researchers from countries that submit the lion's share of arXiv papers—the United States, Canada, and a small number of industrialized countries in Europe and Asia—tend to plagiarize less often than researchers elsewhere. For example, more than 20% (38 of 186) of authors who submitted papers from Bulgaria were flagged, more than eight times the proportion from New Zealand (five of 207). In Japan, about 6% (269 of 4759) of submitting authors were flagged, compared with over 15% (164 out of 1054) from Iran.

Such disparities may be due in part to different academic cultures, Ginsparg and Citron say in their PNAS study. They chalk up scientific plagiarism to "differences in academic infrastructure and mentoring, or incentives that emphasize quantity of publication over quality."

From: AAAS

Experts debate ethics and integrity in research
By Viviane Monteiro | November 24, 2014

Ethics and integrity in science have come to occupy a privileged place in research institutions and funding agencies around the world

At a time of growth in Brazil's scientific output, experts are calling attention to ethics and integrity in Brazilian science, following the worldwide trend. Researchers from Unifesp, UFRJ and PUC-RS, together with the president of the Brazilian Society for the Advancement of Science (SBPC), Helena Nader, met last Tuesday, the 18th, at Unifesp to discuss the theme "Reconfiguring Science for the 21st Century: integrity in science".

The participants used the meeting to publicize the 4th World Conference on Research Integrity, to be held in Brazil for the first time, from May 31 to June 3, 2015, in Rio de Janeiro. The role of scientific integrity in the research systems of different countries will be one of the main items on the conference agenda; the event was first held in Lisbon in 2007, then in Singapore in 2010 and in Montreal in 2013.

The discussions at Unifesp centered on emerging questions of scientific integrity: in particular, the relationship between quality and project funding, the responsible communication of research results, and cultural issues associated with scientific misconduct, including plagiarism.

In Brazil, precise data on the incidence of plagiarism and other forms of scientific misconduct are still scarce, owing to the lack of more comprehensive studies, observes Rosemary Shinkai, a professor at PUC-RS, editor-in-chief of Revista Odonto Ciência (Journal of Dental Science) and a member of the council of the Committee on Publication Ethics (COPE).

"The data we have on Brazil today relate to papers published in foreign journals indexed in international databases. For most journals published domestically there are still no data, because professionalized editorial quality control is lacking," said Shinkai, who spoke on the theme of Research Integrity and Ethics in Scientific Publishing.

According to her, many cases of misconduct are detected by editors, reviewers and readers during and after publication. As examples of misconduct in research she cited falsification and the manipulation of images; as examples of misconduct in publication, plagiarism and redundant publication.

Integrity in funding

According to the event's organizers, ethics and integrity in science have occupied a privileged place in research institutions and funding agencies around the world in recent decades.

Marisa Russo, a professor at Unifesp and organizer of the debate, said its aim is to raise awareness of integrity and ethics in science within the scientific community, especially in São Paulo, since the subject is being discussed worldwide. She noted that the São Paulo Research Foundation (Fapesp) has encouraged universities, researchers and students to show "sensitivity" to integrity in science when engaging with funding agencies; in other words, it asks that this awareness be a priority in research funding and in projects.

Sharing this view, Sonia Vasconcelos, a professor at UFRJ, highlighted the attention that transparency in research funding has been receiving worldwide. She added that research-integrity policies have entered the agendas of many countries over the past 15 years.

Speaking on the theme "Responsible conduct in research: consensuses and conflicts", Vasconcelos, who is also vice-director of UFRJ's Technical Chamber of Research Ethics, said that one point of consensus today is that research needs to be more collaborative. Indeed, she stressed that solving the ethical problems associated with collaborative research networks can be considered one of the great challenges facing many such networks, in the health sciences for example.
Open science

Another worldwide consensus, she said, is recognition of the benefits of open science, which encourages the sharing of research data to solve problems of broad interest and which, according to Vasconcelos, clearly "expands the more conventional model of scientific communication".

As an example, Vasconcelos cited England, which has invested considerably in open science, a positive sign, since it contributes to the advancement of the system. Even so, she said that international recognition of the important role of open science does not mean that every country views the model in the same way.

"Each country deals with these challenges differently. For example, what is good for England is not necessarily good for Brazil to the same degree. But this networked sharing is being encouraged," said Vasconcelos, citing a detailed document on the subject produced by the Royal Society.

Concerns in China

Although China's investment in research and development (R&D) threatens to overtake that of the United States in terms of growth, Vasconcelos said this Chinese advance has not been accompanied by concrete action on research integrity.

She cited a recent journal article in which a Chinese editor expresses concern about the situation in China, given the significant growth in the share of GDP invested in research and development. Another concern raised in the article, according to Vasconcelos, is that investment in innovation is still not advancing at the pace needed to foster the creativity of China's young talent.

Vasconcelos stressed that, on the innovation front, encouraging Chinese students to practice arguing ideas and debating openly was presented in the article as one of the challenges.

She added: "in some programs in which Chinese students take part in the United States and Canada, the difficulty these students have in producing an original text, for example, has been reported."

Public trust

Another worldwide consensus, according to the UFRJ researcher, is the importance of fostering public trust in science. Here Vasconcelos noted that the responsible communication of research results has received special attention in an international document produced by the InterAcademies.

On another front, Shinkai, the PUC-RS researcher, said that the problems detected in publication ethics "are the tip of the iceberg". She suggests encouraging ethics and integrity in research from students' first research experiences onwards; that is, from the beginning of a researcher's training.

"If the problem is detected after an article is published, or after the research is publicized, we are already failing from the educator's point of view, because it is too late. The ideal is to work with students and researchers to prevent these problems from occurring."

To tackle the problem, Shinkai advocates coordination among bodies such as the Ministry of Education (MEC), the Ministry of Science, Technology and Innovation (MCTI) and the Ministry of Culture, together with the funding agencies that have considerable influence over researchers. "We also need coordination among scientific societies, journals, editors, readers and society at large."

Source: Jornal da Ciência