Negida Community Membership

Starting at $0/mo
There are several ways in which you can become a member of Negida Academy.

🔥 BASIC Membership 🔥 (Free)
- Weekly posts from Dr. Negida about clinical research skills
- Announcements of external opportunities for trainees
- Negida Handbook of Clinical Research - Part 1, AKA METHODOLOGY
- Negida Handbook of Clinical Research - Part 2, AKA BIOSTATISTICS (coming soon)

🥇 PREMIUM Membership 🥇 (coming soon in 2022)

👑 GOLD Membership...

May I request a transfer to the new website?

I am registered with you in the upcoming Summer course, but I have not yet joined the new website.
I notified the administrators by email a while ago, and there is still nothing new. I would appreciate it if someone could solve this problem for me. Many thanks in advance.
Negida Academy
Jan 14, 2023
The extra resources, such as the free books, were also added to your account on 9th January. 

Request to transfer to the new website

ุงู†ุง ู…ุณุฌู„ ูู‰ ุงู„ูƒูˆุฑุณ ูˆูƒู†ุช ุนุงูˆุฒ ุงู†ู‚ู„ ุนู„ู‰ ุงู„ู…ูˆู‚ุน ุงู„ุฌุฏูŠุฏ ุจุณ ู…ุด ุนุงุฑู ุงุฒุงู‰
Negida Academy
Jan 14, 2023
Please send an email to ncrt@negida.com

Summer 2022 - instructions were emailed

Dear Colleagues,

The Summer 2022 season starts tomorrow. We have sent you a long email with instructions about the course lectures and how to join. Emails were sent to everyone who subscribed to the courses in Summer 2021, Winter 2022, and Summer 2022.

If you did not see the email, first search your inbox and junk folder for "ncrt@negida.com". If you cannot find it, make sure your email notifications are turned on. Follow these instructions to turn on your notifications.

See you tomorrow at 5:00 PM Cairo time.

BW
Ahmed

C1 book

Greetings Dr. Ahmed
I've just started the C1 course and I need a book to follow along and study the content. Please reply with the book recommendation as soon as possible.
Best wishes
Amgad medhat

Munsour Agroum
May 12, 2022
 
You can use the Negida books; they are free.
Ahmed Negida
Jun 2, 2022
Exactly

Post-NCRT Clinical Research Internship Program

We are happy to announce the Post-NCRT clinical research internship program. This is a 4-month internship program in which participants (after finishing the NCRT courses) are organized into mini-teams, and each team is assigned a research project. Hazem Ghaith, a senior member of Dr. Negida's team who has published more than 10 papers and 10 abstracts in a single year with exceptional success, will tutor the participants through every assigned task using a step-by-step approach. Hazem will meet the teams every Friday at 5:00 PM Cairo time to explain "HOW TO DO THE TASK". By the end of the internship, every team will have finalized at least one research project. We also aim to publish it at least as a conference abstract at an international conference (we guarantee at least one conference abstract publication). The team might also choose to publish it as a journal article (optional).

Duration: 4 months (18 weeks)
Cost: USD 50
Internship Instructor: Hazem S. Ghaith
Projects' supervisor: Dr. Ahmed Negida
Internship Coordinator: Ahmed Hassan Sherif

This is an excellent opportunity to maximize your benefit from Dr. Negida's courses, solidify your knowledge and research skills, and get published.
 

Who can join the research internship program?

If you have previously taken all of the NCRT courses (C1 to C5) in any past year
OR at least have taken the C5 course: Systematic Review and Meta-analysis
OR will take the upcoming Winter School of Clinical Research 2022 (see dates below),
then you are eligible to subscribe to the internship program.

Summary of ALL upcoming courses & schedule


(A) Introduction
20 January 2022; C0: Introduction to Clinical Research and NCRT program [FREE]

(B) Winter School of Clinical Research ##For inquiries: ncrt@negida.com
03-06 February; C1: Introduction to Clinical Research Methodology
10-13 February; C2: Introduction to Medical Statistics on SPSS
18-19 February; C3: How to Calculate the Sample Size
25-26 February; C4: How to Write and Publish Research
03-06 March; C5: Systematic Review and Meta-analysis

(C) Post-course Internship/project ##For inquiries: internship.ncrt@negida.com
25 March; Introduction to the internship program
1 April to 29 July; Internship program (research project)

Feel free to share this news and links with your research teams and students on social media. 

Thank you!
NCRT team

How to handle missing data in the intention-to-treat analysis of pragmatic randomized controlled clinical trials

Attrition bias is a systematic error caused by unequal loss of participants from a randomized controlled trial (RCT). In clinical trials, participants might withdraw due to unsatisfactory treatment efficacy, intolerable adverse events, or even death. Additionally, patients who do not comply with the treatment schedule or who seek additional interventions outside of the study protocol are more likely to be excluded from the study due to violation of the study protocol. These dropouts can influence (1) the statistical power of the study and (2) the balance of confounders between the groups.

(1) The statistical power of a study refers to the ability to detect an effect if one exists. Say you compare two treatments and you find there is no significant difference between them. If you do not have sufficient statistical power, you do not know whether you failed to find a difference because:

a) there is truly no difference between the groups.

or

b) you were just not able to detect the difference.

To have sufficient statistical power, you need to make sure your sample size is large enough. (This should be calculated prior to your study).
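To make the power calculation concrete, here is a minimal sketch (not from the original post; the numbers are hypothetical) of the standard normal-approximation formula for the sample size needed to compare two means:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a mean difference `delta`
    with common standard deviation `sd` (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = z.inv_cdf(power)            # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# e.g. detect a 5-point difference when the SD is 10 points:
n = n_per_group(delta=5, sd=10)   # 63 per group
```

With these inputs the formula gives 63 participants per arm; enrolling extra participants on top of this, as discussed below, compensates for expected drop-outs.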

(2) Confounders are variables that the researcher failed to control or eliminate that could affect the outcome of the study. For example, if a treatment group and a control group differ in terms of variables such as gender or socioeconomic status, these could be potential confounders. So if, for example, a disproportionate number of women drop out from one group, this could affect the balance of confounders between the study groups. This is because, ideally, we want the study groups to be as similar as possible, differing only in terms of the intervention they receive.

How to overcome attrition bias

To avoid a substantial decrease in study power, it is recommended that investigators enroll more participants than the minimum required sample size. This allows researchers to compensate for expected withdrawals. However, although this step is important, it is not sufficient to totally avoid bias, even if the number of remaining patients (after withdrawals) is enough to give the required statistical power.

This is because, as mentioned, random allocation of participants to the study groups ensures balancing of known (and unknown) confounders, which is vital to the validity of randomized controlled trials. Therefore, disproportionate withdrawal of participants from one group can distort the distribution of confounding variables among the study groups.

To avoid the bias that can arise from this, it is necessary to include those patients who drop out in the analysis. Accordingly, the intention-to-treat (ITT) analysis has been introduced as a statistical solution: all randomized patients are included in the final analysis, irrespective of any noncompliance or withdrawal from the study. The challenging step in the ITT analysis is to estimate the end-point values of non-compliant or lost patients, because these data are usually not available. However, there are multiple approaches to estimate these data:

(1) Last observation carried forward (LOCF) analysis

In this approach, investigators use the last observed data as the end-point data of the lost patients. For example, imagine a participant was meant to be followed up at 6 weeks, 10 weeks, and 16 weeks, but at the 16-week follow-up they could not be contacted. In this case, the data they gave at 10 weeks is ‘carried forward’ and assumed to be their score at 16 weeks. However, this method should be used cautiously, because it assumes that participants’ outcomes remain stable after they drop out. The problem is that this scenario can give biased results when the underlying disease has a progressive nature, meaning that the disease deteriorates over time.
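As a toy illustration (hypothetical scores, not from any real trial), LOCF can be sketched in a few lines:

```python
def locf(visits):
    """Replace missing (None) visits with the last observed value."""
    filled, last = [], None
    for v in visits:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Scores at weeks 6, 10, and 16; the 16-week visit was missed:
completed = locf([42, 47, None])   # [42, 47, 47]
```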

For example: imagine an RCT about neuroprotection against Parkinson’s disease. (Neuroprotective strategies are interventions applied early in the disease with the intention of delaying disease progression.) When the investigators use the last observation value, it is likely that the lost patients will have values that indicate less disease progression than the actual end-point. That is, whether they receive the active intervention or control, their earlier values are likely to be better. And this could lead to an overestimate of the intervention efficacy.

(2) Multiple imputation method

The aim of this approach is to predict the endpoint value of lost patients using regression models. Imputations are performed through regression models, and then random errors are added to the predicted values through a random number generator. Essentially, this approach uses the values that have been recorded to generate several plausible estimates of each missing value, which are then pooled into a single result.
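The idea can be sketched with a toy regression imputation (hypothetical data and variable names; real analyses use dedicated multiple-imputation routines in statistical packages):

```python
import random
from statistics import mean

baseline = [50, 55, 60, 65]      # baseline scores of completers
endpoint = [48, 52, 59, 63]      # their observed endpoint scores
dropout_baseline = 70            # a participant lost to follow-up

# Ordinary least-squares fit of endpoint on baseline:
mx, my = mean(baseline), mean(endpoint)
slope = (sum((x - mx) * (y - my) for x, y in zip(baseline, endpoint))
         / sum((x - mx) ** 2 for x in baseline))
intercept = my - slope * mx

# Draw several imputations, each with an added random error term,
# then pool them (the essence of "multiple" imputation):
random.seed(0)
imputations = [slope * dropout_baseline + intercept + random.gauss(0, 1)
               for _ in range(5)]
pooled = mean(imputations)
```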

(3) Analysis of the worst-case scenario

In an RCT, because participants are randomly allocated to the treatment and control groups, any systematic differences between them are most likely attributed to the treatment. So one approach is for researchers to assume the worst-case scenario to fill the data for the participants lost to attrition.

If the outcome is dichotomous (e.g. mortality), then we can assume the worst event (i.e. death) for drop-outs from the experimental group and the best event (i.e. survival) for any participants who dropped out of the control group.

If the outcome is continuous, we can assign the best baseline value and the worst endpoint value to the drop-outs.
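The dichotomous fill rule above can be sketched as follows (hypothetical outcome lists; 1 codes the event, e.g. death):

```python
def worst_case_fill(outcomes, arm):
    """outcomes: 1 = worst event (e.g. death), 0 = best event,
    None = drop-out. Experimental drop-outs get the worst event (1),
    control drop-outs get the best event (0)."""
    fill = 1 if arm == "experimental" else 0
    return [fill if o is None else o for o in outcomes]

exp_arm = worst_case_fill([0, 0, 1, None, None], arm="experimental")
ctl_arm = worst_case_fill([0, 1, 1, None, 0], arm="control")
# exp_arm == [0, 0, 1, 1, 1]; ctl_arm == [0, 1, 1, 0, 0]
```

If the treatment still looks superior after this deliberately pessimistic fill, the conclusion is robust to the drop-outs.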

This approach yields a conservative estimation of the treatment effect. So if the treatment was found superior to the control, we can be confident the treatment really was more effective. This is because, if anything, this type of analysis would underestimate the magnitude of the treatment effect. However, a key problem with this approach is that poor compliance may not necessarily mean the treatment was ineffective. If this analysis shows the treatment is not superior to the control, we cannot be sure why this is. It could be because: 1) the treatment is truly ineffective or 2) it is a consequence of the drop-outs. Therefore, to avoid misinterpretation, it is advisable to analyze RCTs using multiple approaches, including per-protocol analysis and multiple ITT scenarios whenever possible.

Example


Altman and colleagues performed an RCT comparing two treatments for pelvic organ prolapse, a condition in which one or more of the pelvic organs bulge into the vagina. (The two treatments compared were anterior colporrhaphy versus transvaginal mesh.)

In this study, they performed:

1) per-protocol analysis (i.e. only including those patients who completed both the treatment and follow-up).

and

2) another analysis scenario (ITT analysis assuming the worst-case scenario).

In the statistical analysis section, they wrote these two sentences:

“analyses included both a per-protocol analysis and a conservative sensitivity analysis of the binary primary outcome. For purposes of the sensitivity analysis, we assumed a worst-case scenario for the mesh-repair group. (i.e. for all patients with missing data in the mesh-repair group, the study treatment was considered to be unsuccessful, whereas, for patients with missing data in the colporrhaphy group, the study treatment was considered to be successful)”.

Additionally, when reporting the primary outcome, the authors mentioned: “The result of the per-protocol analysis was similar to that of the intention-to-treat analysis (adjusted odds ratio, 4.3; 95% CI, 2.6 to 7.2)”. Therefore, we can be confident that the treatment effect reported in this study is not likely to be influenced by the differential loss of some participants during the follow-up.

Conclusion

Differential loss of participants from RCTs results in attrition bias. The ITT analysis is recommended to minimize attrition bias. It is recommended that researchers:

(1) try to obtain, where possible, data about drop-outs from other sources (e.g. death registry).

(2) try to impute the missing data using multiple approaches.

(3) perform multiple types of analyses, including per-protocol analysis and ITT scenarios.

That way, when the different analyses lead to the same conclusions, we can be more confident the conclusions are robust.

How to Develop A Questionnaire? The Methodological and Statistical Considerations

Hello,

One week ago, I posted (here) asking for recommendations for the upcoming Webinar and some of you recommended the topic of questionnaire development. Further, I posted the suggestions to my YouTube channel community, and still, the highest recommendation is the questionnaire development dilemma (voting here). So I decided to discuss it briefly in this blog post.

Questionnaire vs. CRF

The most common misconception I have found in the medical community is that some people consider the case record form (CRF) of ANY clinical research study to be a questionnaire. This is NOT true. There is a big difference.

A CRF is a form that includes all demographic, clinical, laboratory, and imaging variables that you will collect from the patient for a clinical study - all variables in one place, as a hard copy (called a CRF) or in electronic format (eCRF). People who work in the operations departments of pharmaceutical companies and contract research organizations are more likely to use this term (it is less used in academia).

A questionnaire is a set of questions that are scored as a psychometric measure for psychological variables and other variables that are felt but cannot be observed directly.

Home-grown questionnaire vs. Standard Questionnaires

My mom is a super talented chef (this is a fact, not a compliment; she never reads these posts anyway), but I still find the pizza from Pizza Hut more delicious than the home-made one, and there is nothing personal about it. Every time someone speaks to me about their *revolutionary* new home-made questionnaire, I remember this pizza analogy, because no matter how much time and effort you invest to prepare (or cook) a new home-grown questionnaire, using a standard, reliable, validated questionnaire is still much better (but only if one exists!).

Needless to say, if there is no standard, reliable, validated questionnaire, then you are doing a good service to your patients and to the research community by trying to develop a new one.

1. Developing the set of questions

First, you cannot develop a questionnaire alone. The questions should be developed by a team of physicians and researchers with long-standing expertise in evaluating the condition of interest. The team should also include a psychometrician.

2. How to test the construct validity

Construct validity is the extent to which the questionnaire measures the theoretical or psychological outcome of interest. To establish construct validity, you need to run a principal component analysis (PCA) and a factor analysis (sometimes called confirmatory factor analysis, or CFA). The minimum number of responses required to conduct this analysis is 60. This analysis allows you to investigate how much correlation exists between the different questions, how many psychological factors play a role in the responses to these questions, and whether any question is likely to be unrelated to the outcome of interest. All these advanced calculations are done by statistical software (I briefly explained this here).
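As a small taste of what PCA and factor analysis build on, here is a sketch of inter-item correlations using hypothetical responses: items measuring the same underlying factor correlate strongly, while an unrelated item stands out (the full PCA/CFA is run in statistical software, as noted above):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length response lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

q1 = [1, 2, 3, 4, 5, 4, 3]
q2 = [2, 2, 4, 4, 5, 5, 3]   # tracks q1: likely the same factor
q3 = [3, 5, 1, 4, 2, 5, 1]   # unrelated response pattern

r12 = pearson(q1, q2)   # strong inter-item correlation
r13 = pearson(q1, q3)   # near zero: candidate for removal from the scale
```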

3. How to determine the optimal cut-off values

If the questionnaire will be used to diagnose a condition (e.g. anxiety), then it is important to plot sensitivity against 1-specificity to determine the optimal cut-off value at which the questionnaire performs best in classifying the participants. The receiver operating characteristic (ROC) curve is used for this purpose; it shows the trade-off between sensitivity (the true positive rate) and specificity (1 minus the false positive rate). Usually, Youden's index is used to determine the optimal cut-off value (I briefly explained this here).
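Here is a hand-rolled sketch (hypothetical scores and diagnoses) of scanning candidate cut-offs for the maximum Youden index, J = sensitivity + specificity − 1:

```python
def youden_cutoff(scores, has_condition, cutoffs):
    """Return (best_cutoff, best_J) over the candidate cut-offs."""
    best = None
    for c in cutoffs:
        tp = sum(1 for s, d in zip(scores, has_condition) if s >= c and d)
        fn = sum(1 for s, d in zip(scores, has_condition) if s < c and d)
        tn = sum(1 for s, d in zip(scores, has_condition) if s < c and not d)
        fp = sum(1 for s, d in zip(scores, has_condition) if s >= c and not d)
        j = tp / (tp + fn) + tn / (tn + fp) - 1   # sensitivity + specificity - 1
        if best is None or j > best[1]:
            best = (c, j)
    return best

scores = [3, 5, 6, 8, 9, 11, 12, 14]   # hypothetical questionnaire scores
anxiety = [0, 0, 0, 0, 1, 1, 1, 1]     # 1 = condition per gold standard
cutoff, j = youden_cutoff(scores, anxiety, cutoffs=range(1, 15))
# cutoff = 9, J = 1.0 (perfect separation in this toy data)
```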

4. How to test questionnaire accuracy

Accuracy is another form of validity. It is a quantitative measure of how accurately the questionnaire rules in and rules out the condition of interest. This can be assessed by calculating the overall accuracy, sensitivity, and specificity against the gold-standard evaluation. PPV, NPV, +LR, and -LR are less commonly used and less useful for questionnaires (I discussed this in detail in more than 5 papers on PubMed - look for them if you want).
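These metrics can be computed directly from a 2x2 table of questionnaire result versus gold standard; a minimal sketch with hypothetical counts:

```python
# Hypothetical 2x2 counts: questionnaire result vs. gold standard.
tp, fp = 40, 10   # questionnaire positive: true and false positives
fn, tn = 5, 45    # questionnaire negative: false and true negatives

sensitivity = tp / (tp + fn)   # high value -> a negative result helps rule OUT
specificity = tn / (tn + fp)   # high value -> a positive result helps rule IN
accuracy = (tp + tn) / (tp + fp + fn + tn)
# sensitivity ~ 0.89, specificity ~ 0.82, accuracy = 0.85
```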

5. How to determine the questionnaire reliability

There are many levels of reliability that should exist in the questionnaire:

(A) Internal reliability
Questions should have the same direction. Internal reliability is measured by Cronbach's alpha and should be no less than 80% (the higher, the better). If the internal reliability falls below the 80% threshold, you can perform a sensitivity analysis to detect which questions should be omitted to fix these reliability issues, and you can change the questionnaire accordingly (I also mentioned this briefly with an example here).
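For illustration, Cronbach's alpha can be computed by hand from its definition, alpha = k/(k−1) × (1 − Σ item variances / variance of total scores), using hypothetical responses:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: k lists of responses, one list per question,
    aligned across the same respondents."""
    k = len(items)
    item_vars = sum(variance(q) for q in items)
    totals = [sum(resp) for resp in zip(*items)]   # total score per respondent
    return k / (k - 1) * (1 - item_vars / variance(totals))

q1 = [4, 5, 3, 4, 5, 2]
q2 = [4, 4, 3, 5, 5, 1]
q3 = [5, 5, 2, 4, 4, 2]
alpha = cronbach_alpha([q1, q2, q3])   # ~ 0.92 for these responses
```

Here alpha is about 0.92, above the 80% threshold mentioned above.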

(B) Test-retest reliability
Reliability here means that every time you ask the same questions, you get the same answers, irrespective of how accurate those answers are (accuracy is a different parameter, as I mentioned above). To investigate test-retest reliability, it is advised that a set of pilot responders answer the same questionnaire on two occasions separated by a two-week period, to allow the responders to forget their responses to the initial questionnaire. These data are analyzed similarly to the "inter-rater reliability analysis" of clinical scores. To clarify, you treat these two sets of responses from the same patients as two different raters and calculate Cohen's kappa statistic of agreement to determine your K value (the higher, the better). I'm not aware of a minimum threshold for this and have not read a specific guideline, but a questionnaire seems reliable to me if the K is above 0.6.
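A small sketch of Cohen's kappa on two hypothetical sets of responses from the same pilot responders:

```python
def cohens_kappa(first, second):
    """Chance-corrected agreement between two response sets."""
    n = len(first)
    p_observed = sum(a == b for a, b in zip(first, second)) / n
    p_chance = sum((first.count(c) / n) * (second.count(c) / n)
                   for c in set(first) | set(second))
    return (p_observed - p_chance) / (1 - p_chance)

test1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no"]
test2 = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "no"]
kappa = cohens_kappa(test1, test2)   # ~ 0.6 for these responses
```

Here K is about 0.6, right at the threshold mentioned above.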

6. Questionnaire Feasibility

A good questionnaire should be socially acceptable, feasible to answer, and easy and simple to understand. For this purpose, adapting a standard questionnaire to your community might require TRANSLATION (to overcome the language barrier and make it easy to understand) and CULTURAL ADAPTATION (to make it socially acceptable and feasible to answer). A major problem here is that if you introduce substantial changes to the questionnaire itself, it might lose its reliability. Therefore, if you plan to adapt a questionnaire and translate it into your own language for research purposes, you have to do the following:
  1. Translate the questionnaire from the original language to your language
  2. Back-translate the translated questionnaire to the original language
  3. Compare the original to back-translation for inconsistencies
  4. Pilot the final translated questionnaire in a pilot sample of your population to get their feedback on any unclear or difficult questions
  5. Correct and/or modify whenever needed
  6. Pilot the questionnaire again to ensure its reliability
  7. Now, finally, the questionnaire will be scientifically suitable for research in your community

It is also recommended that:

1- First, publish the process that you followed to make the questionnaire ready for use. For example, if you have developed a new questionnaire from scratch, you should first publish the "development & validation" process as an article. Similarly, if you have translated a questionnaire, you should publish the "translation & reliability assessment" process as a research article before you publish any results measured by that questionnaire.

2- If the questionnaire you use is not valid or reliable, no one can count on or trust your findings. Therefore, the first thing you should do is validate the tool and declare its validity and reliability to the scientific community; then you can start using it and publish whatever you want by applying it to your population.

3- Data of the pilot phases MUST BE EXCLUDED from the final analysis of your work.

I realize this post is a bit long, but the topic is very important and interesting. I hope this helps. Please, let me know in the comments if you suggest that I write about any other topics of interest. 

Thank you ... AN
Neveen Alaa
Aug 9, 2021
This is a very valuable and informative post. May God bless you, Dr. Ahmed Negida.

If you are a junior researcher, don't fall into the "Quality not Quantity" trap

Hello everyone,

Groundbreaking research requires learning, commitment, hard work, persistence, and the cumulative efforts and expertise of several individuals over months or even decades! I believe that engaging junior researchers in such a lengthy process is a double-edged sword.

First of all, let's agree on the definitions. In science, a junior researcher is defined as someone who has been doing research continuously for less than 7 years, while a senior researcher (or scientist) is someone who has spent more than 7 years in continuous research work.

At the early stages of the academic pathway (0-4 years), everyone stops to ask themselves:
  • "When will I get published?"
  • "When will I see a meaningful outcome of these efforts?"
  • "I want to make sure this is the right thing for me"
  • "I want to make sure the process will work".

Usually, when you are at this stage, I assume that you have already learned the basic research skills for your field and have acquired sensible *theoretical* knowledge about the research process in your field. 

I always advise junior researchers, at the early stages, to (1) employ, refine, and improve their skills by participating in several small, pilot, initial research projects rather than one high-impact, high-quality mega project, (2) get involved in as many research projects as they can, and (3) accept the risks of making mistakes and the risk that some projects might eventually be published in modest journals. My advice is always criticized by people who advocate the "Quality NOT Quantity" mantra. In this post, I explain philosophically, scientifically, and historically from my experience why the "Quality NOT Quantity" mindset does more harm than good for early junior researchers (0-4 years).

Your first research projects are very critical

Your first research projects will give you either the hope and motivation to continue on that pathway or a big early failure and disappointment. I believe that at this early stage it is very important to keep students motivated and on track to continue doing research. The learning curve in scientific research is steep, and the outcomes are usually slow and might be delayed for years. The pathway to finishing your research projects and getting published is not usually as straightforward as you might think.

The concept of the initial project

A wise decision at this point is to involve students in what I call *the initial project*, *the pilot project*, or *the short gains*. This is a simple and easy pilot project with a straightforward plan and plausible publication chances. It can be a simple narrative review article, a systematic literature review, a case report, or a cross-sectional study. Furthermore, I sometimes encourage students to submit their work to international conferences. The moment they receive their first acceptance email, their motivation is reborn, their self-confidence is built, their energy for the work is recharged, and most importantly, the many previously unanswered questions in their subconscious mind about their capacities and skills resolve immediately.

One big high-quality project OR multiple small lower-quality projects?

Clinical research is different from many other fields of scientific research. Clinical research extends far beyond the scientific experiments done in laboratories. Since clinical research deals with patients, it requires clinical knowledge AND expertise in epidemiological research methodologies and medical statistics. Employing this knowledge and skills in field projects and/or systematic reviews of published literature is important to build the capacity of early career researchers in the field of Medicine.

While working on a BIG project seems impressive to many people, I always recommend against participating in this kind of work, which consumes your effort and kills your other skills and motivation, unless it is accompanied by many other parallel projects. I advocate that early career researchers benefit more from working on multiple small initial projects that diversify and solidify their skills and also give them the satisfaction and motivation they need at this stage.

Furthermore, working on multiple small projects maximizes the chances of publication by combining the independent probabilities of getting each piece of work done and published. The longer the time from your starting point to your first publication, the higher the possibility of losing interest and motivation, and I know many people who have already quit the pathway as a result of this wrong strategy.

Conducting a high-impact multicenter project that achieves SMART goals is a good step. But building your self-confidence, building your connections, strengthening your skills in methodology and data analysis, and choosing the correct timing for such a mega project are more important. I personally consider any project that takes more than 6 months to be a high-risk project for junior researchers. I recommend against involving students in that kind of work unless it is a tiny part of many parallel projects and provided that it does not consume their energy.

In the following paragraphs, I summarize the key phases of my research journey starting from July 2014 until now.

PHASE 1 (July 2014 to November 2015)

In the first 17 months of my research journey, I did not aim to publish research as much as I aimed to learn every single step in the process, allowing myself to make mistakes and be embarrassed by the *harsh* reviewers' comments and editors' feedback on my work.

PHASE 2 (December 2015 to April 2017)

Once I had published my first co-authored paper, for which I wrote 30% of the manuscript and ran the data analysis, I started a new phase in my research journey. During the 1.5 years of this phase, my aim was to publish as many papers as I could. At this stage, I never aimed to publish in a top journal and never considered the impact factor when doing research; I was pragmatically looking at the final outcome: publishing. Some of our papers would have had better chances in Q1 journals, but we chose to submit to journals with a greater than 50% chance of accepting them. I remember one meta-analysis by our team was published in a journal with an impact factor of 1.5; a few months later, a similar meta-analysis on the same clinical trials, with the same number of patients and the same findings, was published in a top Q1 journal with an impact factor above 5. This phase ended when my mentor Prof. Mohamed Abdel-Daim, Ph.D., faithfully gave my team the best advice at that time: "You should only publish in PubMed- and SCOPUS-indexed journals with a minimum impact factor of 2," he advised.

PHASE 3 (May 2017 to April 2018)

During this one-year phase from May 2017 to April 2018, I focused on publishing as many papers as I could, but only in PubMed- and SCOPUS-indexed journals with a minimum impact factor of 2.

Phase 4 (May 2018 till now)

By May 2018, I found myself fully saturated with publications, and I felt it was time to become more focused on topics and methodologies that would directly and positively benefit my career pursuits after graduation (Neurological Surgery and Global Neurosurgery). At this phase, I asked my team to immediately remove my name from the authorship of several high-quality manuscripts that were deemed too far from my target, despite my substantial contributions to this work. Some of these papers are now highly cited and impactful in clinical guidelines; however, I do not regret the decision and the step I took. I started to collaborate with neurosurgeons and get involved in neurosurgery research. I initiated the Global Neurosurg Research Collaborative and the World Global Neurosurgical Outcomes Collaborative, currently hosted by Oregon Health and Science University in the United States. Since May 2018, these have been my goals, in addition to working on the thesis in which I critically discuss 17 of my previously published works on Parkinson's Disease (11 journal articles and 6 conference abstracts) as part of my Doctor of Philosophy (Ph.D.) by publication in biomedical sciences.

Phase 5 (From October 2021 ...)

This phase has not started yet; however, it is a well-planned continuation of the aforementioned progress. It should start in October 2021, and I will tell you about it when I get involved in this phase!

My take-home messages for early junior researchers (0-4 years) are as follows:


1- First, you should do research to learn (phase I), then to become motivated to do more (phase II), then to become personally satisfied (phase III), and finally to advance science and improve your field for the remainder of your academic career.

2- Participate in as many research projects as you can, provided that you avoid 3 things: (1) ethical misconduct, (2) plagiarism, and (3) publishing in predatory journals.

I hope that this helps! Please, let me know what you think and write your opinion in the comments below. If you are not a member of our community, you can subscribe for FREE to the basic membership level. 

Stay tuned for more of my thoughts and opinions.
Thank you ... Ahmed Negida
Aya Al-Nabahin
Feb 15, 2021
Thank you a lot; you make me excited to continue my journey. I learned a lot from your advice and sessions.

Top 10 lessons I learnt over the past 7 years in clinical research

Hello,

I'm listing below the top 10 lessons I have learned over my 7 years of experience in the field of clinical research (2014-2021).

(1) Building cumulative research efforts in one specific field is much more important than making several achievements in scattered unrelated fields.

(2) Although Google Scholar citations might not be a reliable indicator of the impact and quality of your research output, these metrics are still considered by several governmental agencies as reliable indicators of your influence in the field.

(3) You do not need to publish in a high-impact journal to get many citations; your work can make an impact by itself, irrespective of the journal where it was published. I published a letter to the editor in a new journal (in 2015), and it has been cited more than 200 times so far (it currently ranks among the top 1% of cited articles in clinical medicine for its year of publication).

(4) Do not fall for the quality vs. quantity trap. In the first 5 years of your research career, quantity matters more than quality (as long as you stay away from predatory journals, plagiarism, and ethical misconduct); publish as much as you can in your specific field (see number 1 again).

(5) Communicate and collaborate. Work in multiple groups whenever possible. Parallel work maximizes your chances of publication and increases your chances of building new connections.

(6) Work in small teams to learn more and in big teams to get closer to academic opportunities.

(7) Do not compare yourself to others. Do not create enemies. Our enemy is the disease.

(8) Conferences are mainly for scientific communication, building bridges for collaboration, and learning about new updates in the field. I do not consider conference presentations as a successful publication unless published later in a journal.

(9) Be patient. Your first publication might take about 6-24 months.

(10) This is the most important lesson. Your mentor = Your success. Good mentors are difficult to find. Find good mentors and follow them. I do not usually speak about my mentors, but they played the most important roles in guiding me forward. I always feel privileged to have met them, and I am still learning from them to this day.

Thank you very much!

I will discuss these points in the upcoming few weeks on my youtube channel available here: https://www.youtube.com/channel/UCfMrL5_2TidACpbvgF9sPDA

yasser abdelkareem
Feb 2, 2021
Seriously, I get many benefits from your advice. Many thanks. You are a candle lighting the route of science.