
Research Figures Take Longer Than the Experiments: The Evidence and the Fix

Evidence from researcher communities shows figure preparation routinely consumes more time than any other part of manuscript preparation. Here is the documented evidence for each failure mode and the free tools that eliminate them.

11 min read · ScholarBits

Figure preparation is, by most researcher accounts, the least rewarding part of manuscript work. The Washington University School of Medicine's author checklist describes figures as consuming "more time than any other portion of manuscript preparation."1 The evidence from researcher communities confirms this: DPI rejections, colorblind accessibility flags, LaTeX table formatting nightmares, and scale bar errors are a consistent source of manuscript revisions that have nothing to do with the science.

A 2024 survey thread on r/PhD asked what the most annoying part of research was. Figure preparation and reformatting consistently ranked alongside reviewer response and citation management among the top three sources of time loss.2 This is not a minor workflow irritant — it is a structural problem with how scientific figures are produced and checked.


Why Figures Fail at Submission

The failures cluster around three categories. First, technical specification violations: journals require minimum 300 DPI, specific file formats (TIFF with LZW compression, EPS, or PDF), and exact colour space settings (RGB for online, CMYK for print). Second, accessibility failures: figures that are unreadable in greyscale or to colour-blind readers are increasingly grounds for revision requests at journals with accessibility policies. Third, data integrity failures: discrepancies between numbers stated in the main text and numbers shown in figures or tables are a top reason for post-acceptance corrections.

Each of these is preventable with a pre-submission check. Most researchers skip the check because it takes time they don't have. The solution is tools that make the check instantaneous.


Tool 1: One-Click DPI Verifier

The problem

The "at least 300 DPI" requirement for print figures is universal across academic journals and is one of the most common reasons for submission returns.3 The problem is not that researchers don't know the requirement — it is that checking DPI requires navigating nested OS menus. On macOS: right-click → Get Info → More Info → scroll to find resolution. On Windows: right-click → Properties → Details tab → scroll to find the resolution fields.4

This is four clicks and significant scrolling, multiplied by every figure in the manuscript. For a paper with twelve figures, this is a 10–15 minute audit of pure mechanical checking. Researchers describe finding out at submission — after uploading — that several figures failed the check, requiring the entire upload to be restarted.5

Adobe's own documentation on DPI acknowledges that "many image editors don't make this obvious."5

How it works

The One-Click DPI Verifier accepts a drag-and-drop image upload and instantly reports the horizontal DPI, vertical DPI, colour space, file format, and whether the image meets common journal thresholds (300 DPI for halftones, 600 DPI for line art, 1200 DPI for combination figures). The result is visible in under a second.3

Try it: DPI Verifier on ScholarBits


Tool 2: CSV-to-LaTeX Matrix Bot

The problem

For mathematical, physical, and engineering sciences, entering tabular data or matrices in LaTeX requires manually placing ampersand column separators and backslash-backslash row terminators throughout a structured environment. A 6×4 data table requires 18 & separators and 6 \\ terminators, all placed in exact positions where a single error causes compilation failure.

A highly upvoted thread on r/math describes this as the defining frustration of long-term LaTeX use: "meticulous placing ampersand separators" for a matrix that already exists in a notebook or spreadsheet constitutes "as much rigid rule-following as a programming language" with a much less helpful compiler.6 The Overleaf LaTeX tools page documents this as one of the most common sources of user support requests.7

The work is duplicated — the data exists in one place and must be re-entered in another. This is the definition of "keyboard-level micro-frustration."8

How it works

The CSV-to-LaTeX Matrix Bot accepts a pasted spreadsheet selection (tab-separated or CSV) and generates a complete, compilable LaTeX table environment: correct column spec, ampersand separators, row terminators, and optionally a \toprule/\midrule/\bottomrule booktabs-style structure. For matrices, it generates the appropriate bmatrix, pmatrix, or vmatrix environment. The output pastes directly into Overleaf.7
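The core transformation is simple enough to sketch. A minimal Python version for the booktabs-style table case — function names are illustrative, and the real bot additionally handles matrix environments and special-character escaping:

```python
import csv
import io

def csv_to_latex_table(csv_text, align="l"):
    """Convert CSV text into a compilable booktabs tabular environment.

    The first row is treated as the header; every cell gets an ampersand
    separator and every row a backslash-backslash terminator.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    spec = align * len(rows[0])

    def tex_row(cells):
        return " & ".join(cells) + r" \\"

    lines = [r"\begin{tabular}{" + spec + "}", r"\toprule",
             tex_row(rows[0]), r"\midrule"]
    lines += [tex_row(r) for r in rows[1:]]
    lines += [r"\bottomrule", r"\end{tabular}"]
    return "\n".join(lines)
```

For the 6×4 table above, this places all 18 separators and 6 terminators mechanically — exactly the part where a single manual slip causes a compilation failure.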

Try it: CSV-to-LaTeX Matrix Bot on ScholarBits


Tool 3: Accessibility Palette Generator

The problem

Colour-blind readers make up approximately 8% of the male population and 0.5% of the female population. A figure that relies on red-green colour differentiation to convey data is inaccessible to a significant portion of potential readers — and increasingly, journals enforce this. Nature's figure preparation guidelines explicitly require that figures "should be readable by people with colour blindness."9

Beyond colour blindness, many journals are still printed (or printed on demand), and figures that rely on colour differentiation without shape or texture redundancy become indistinguishable in greyscale.

The problem for researchers is not awareness — it is implementation. Converting a default R or Python colour scheme to a colour-blind-safe palette requires looking up hex codes, testing them, and reformatting the visualisation code. This is an interruption to the analysis workflow.

How it works

The Accessibility Palette Generator provides vetted colour-blind-safe palettes (Viridis, Okabe-Ito, ColorBrewer qualitative) with hex codes formatted for direct use in R (ggplot2), Python (matplotlib/seaborn), and Prism. Each palette is displayed as both a full-colour preview and a simulated greyscale/deuteranopia view, so the researcher can verify accessibility before using it.10
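The hex codes for one of these palettes are easy to embed directly in analysis code. A sketch using the Okabe-Ito palette — the formatting helpers are illustrative, not the tool's own output format:

```python
# Okabe-Ito palette: eight colours chosen for colour-blind distinguishability
OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#F0E442",
             "#0072B2", "#D55E00", "#CC79A7", "#000000"]

def as_r_vector(palette):
    """Format as an R character vector, e.g. for ggplot2 scale_colour_manual()."""
    return "c(" + ", ".join(f'"{h}"' for h in palette) + ")"

def as_matplotlib_cycle(palette):
    """Format as a line of Python that sets matplotlib's default colour cycle."""
    return f"plt.rcParams['axes.prop_cycle'] = plt.cycler(color={palette})"
```

Pasting the generated line at the top of a plotting script makes every subsequent figure use the safe palette by default, with no per-figure intervention.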

Try it: Accessibility Palette Generator on ScholarBits


Tool 4: SVG-to-TIFF Exporter

The problem

Most researchers produce figures using vector-based tools: Inkscape, Adobe Illustrator, R's ggplot2 (which exports SVG), Python's matplotlib (SVG export), or presentation software. Vector formats are ideal for editing — they scale without quality loss. But the majority of journals do not accept SVG files and require TIFF or EPS with specific resolution and compression settings.11

Converting SVG to a 300 DPI TIFF with LZW compression currently requires: opening the SVG in Inkscape or Photoshop, changing the document resolution, exporting as TIFF, and specifying the compression. Researchers without access to Photoshop have even fewer options. The process is documented but not trivial, and it must be done for every figure.11

How it works

The SVG-to-TIFF Exporter accepts SVG input (by upload or paste) and produces a TIFF output at a specified DPI — defaulting to 300 — with LZW compression. It also supports EPS output for journals that prefer it, and PDF for journals accepting vector PDFs. The conversion preserves font embedding and vector quality at the target resolution.3

Try it: SVG-to-TIFF Exporter on ScholarBits


Tool 5: Text-to-Table Auditor

The problem

Inconsistencies between numbers stated in the manuscript text and numbers shown in tables or figures are listed as a primary reason for post-acceptance corrections and are flagged by reviewers who "check numbers carefully."12 A paper might state "n=47" in the Methods while the demographics table shows 46 participants, or report "p=0.034" in the results text while the corresponding table shows "p=0.043."

These errors are not fabrications — they are copy-paste and rounding inconsistencies that accumulate during revision. But they undermine confidence in the manuscript's accuracy. A 2024 manuscript checklist from Paperpal lists text-table consistency as one of the ten most important pre-submission checks, specifically noting it as "a top reason for rejection."13

How it works

The Text-to-Table Auditor scans a manuscript for numerical values and cross-references them across the full text, tables, and figure captions. Where the same statistic appears in multiple places, it checks for consistency. Discrepancies are flagged with location references (e.g., "p value in Results paragraph 3 differs from Table 2") so the researcher can make a deliberate decision about which value is correct.12
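The core cross-referencing step can be sketched with a regular expression — the pattern and helper names are illustrative, and the real tool resolves document locations and many more statistic formats:

```python
import re

# Matches simple reported statistics such as "n=47" or "p = 0.034"
STAT = re.compile(r"\b([np])\s*=\s*(\d*\.?\d+)")

def extract_stats(passage):
    """Collect (symbol, value) pairs reported in a passage."""
    return set(STAT.findall(passage))

def audit(text_section, table_section):
    """Return statistics that appear in one section but not the other."""
    return sorted(extract_stats(text_section) ^ extract_stats(table_section))
```

Run on the n=47 / n=46 example above, the symmetric difference surfaces both conflicting values so the researcher can decide which one is correct.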

Try it: Text-to-Table Auditor on ScholarBits


Tool 6: Grayscale Previewer

The problem

Many journals still print in black and white, and even online readers may print papers on greyscale printers. A figure that uses red and green lines to distinguish two experimental conditions becomes two indistinguishable grey lines in print — and the reviewer sees the greyscale version.14

Peer reviewers specifically cite figures that "lose information in black and white" as a frustration: "I've had papers where I couldn't evaluate the figures properly because all the lines looked the same in the print copy I was reviewing."15 This is a preventable problem.

How it works

The Grayscale Previewer applies a greyscale simulation filter to an uploaded figure, showing exactly how it will appear in black-and-white print. It also provides a deuteranopia simulation (the most common form of colour blindness). Researchers can iterate on their colour choices before exporting the final figure, avoiding the "unwanted surprise" of a reviewer pointing out inaccessibility after submission.14
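The greyscale half of the simulation is essentially a one-line luminance conversion. A sketch using the Rec. 601 luma weights, a common approximation for print greyscale — the pairwise gap threshold here is an illustrative choice, not the tool's setting:

```python
def to_gray(hex_color):
    """Approximate the printed grey value (0-255) of a "#RRGGBB" colour."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def survives_grayscale(colors, min_gap=40):
    """True if every palette colour lands far enough apart in grey."""
    grays = sorted(to_gray(c) for c in colors)
    return all(b - a >= min_gap for a, b in zip(grays, grays[1:]))
```

The default matplotlib red and green (#D62728 and #2CA02C) collapse to grey values 91 and 112 — close enough to explain why two such lines become indistinguishable in print.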

Try it: Grayscale Previewer on ScholarBits


Tool 7: Scale Bar Automator

The problem

Microscopy and geology figures require accurate scale bars. Getting the scale bar right requires knowing the objective lens magnification, the pixel size of the camera sensor, and the image resolution — and combining these into a scale bar with the correct physical length. Researchers describe the current process as "an old school art class task" done by placing a line element in PowerPoint or Illustrator and manually calculating its length.16

The consequence of getting it wrong — a scale bar that implies a different magnification than the image was taken at — is not a minor error. It renders the figure scientifically incorrect.

How it works

The Scale Bar Automator takes magnification, pixel size (from the microscope or camera spec sheet), and desired scale bar length as inputs, and calculates the pixel width needed for the bar to represent the specified physical length. It then generates a correctly sized scale bar as an overlay image that can be composited onto the figure, with label text generated automatically.16
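The underlying arithmetic is a two-step unit conversion: each image pixel covers the camera's pixel size divided by the magnification, and the bar's pixel width follows from that. A sketch with illustrative parameter names:

```python
def scale_bar_pixels(bar_length_um, camera_pixel_um, magnification):
    """Pixel width of a scale bar representing bar_length_um on the specimen.

    Each image pixel spans camera_pixel_um / magnification micrometres of
    the sample, so the bar needs bar_length_um divided by that span.
    """
    um_per_pixel = camera_pixel_um / magnification
    return bar_length_um / um_per_pixel
```

For a camera with 6.5 µm pixels under a 40× objective, a 50 µm bar works out to roughly 308 px — a number that is easy to get wrong when calculated by hand in a drawing program.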

Try it: Scale Bar Automator on ScholarBits


Tool 8: Graphical Abstract Bot

The problem

Graphical abstracts — a single-panel visual summary of the paper's key finding — are now required or strongly recommended by many major journals, including Elsevier and Cell Press titles.10 Where required, they are not optional: submissions without one may be returned before peer review.

But most researchers have not been trained in science communication design. The evidence from r/AskAcademia is consistent: researchers "spend waaaay too long making" graphical abstracts, often because they are iterating on a blank canvas without templates or domain-specific visual conventions.17

The standard tools for this — PowerPoint, Adobe Illustrator, BioRender — either produce generic results or require significant design skill and time.

How it works

The Graphical Abstract Bot provides academic-specific figure templates organised by study type (clinical trial, in vitro, computational model, survey, systematic review) and generates a layout suggestion based on the study's key finding. The researcher provides the core outcome and the tool suggests a visual arrangement — arrows, icons, before/after panels — that can be refined and exported.10

Try it: Graphical Abstract Bot on ScholarBits


The Common Thread

Every tool in this category addresses the same underlying problem: the gap between the data that researchers produce during their research and the presentation format that journals require at submission. This gap is not trivial — it is where a significant portion of manuscript preparation time disappears.

The research on clerical drain makes a precise point: tasks that are "highly technical but purely mechanical" are the ideal candidates for automation.8 DPI checking is technical (it requires knowing what 300 DPI means) but purely mechanical (the calculation is trivial). LaTeX table formatting is technical (it requires knowing tabular syntax) but purely mechanical (the data already exists). The tools above remove the mechanical component, leaving the technical judgement — which figure best communicates the finding, which colour palette best serves the data — where it belongs: with the researcher.


References

Footnotes

  1. Author's Checklist for Preparation of Publications — WashU Research

  2. What is the most annoying part about your research? — r/PhD

  3. How To Check And Change The DPI Of An Image — Thomas Group Printing

  4. wikiHow (accessed March 14, 2026)

  5. DPI Meaning | What is DPI & How to Check/Change it — Adobe

  6. After 10+ years of working with it, I'm starting to strongly dislike LaTeX — r/math

  7. LaTeX tools — Overleaf

  8. Tiny Tools: A Framework for Human-Centered Technology — generative-ai-newsroom.com

  9. Submitting a Research Paper? Critical Submission Readiness Checks — researcher.life

  10. Free Printable Science Worksheets | AI Diagram Generator for Teachers — ConceptViz

  11. How to Check Image DPI — Murphy Print

  12. A Researcher's Checklist for Journal Submission Preparation — Proof-Reading-Service.com

  13. 10-Point Manuscript Checklist — Paperpal

  14. Pre-Submission Checklist: 5 Key Steps — MDPI Blog

  15. The Peer Review Process: An Inside Look — Turacoz

  16. The Magical Role of Duke's Tiny Tools — Duke Today

  17. Most annoying part of submitting journal manuscripts — r/AskAcademia
