Saturday, April 5, 2025

TIMSTOF Ultra2 is up and running - first impressions!

 


I'm only up to 300 LCMS injections but I'm at least at a point where I have first impressions of the instrument. This is a new TIMSTOF Ultra2 and refurb/demo EvoSep One System. 5 years ago (!!!) I had impressions of the Flex, my first ever Bruker instrument. A lot has happened in 5 years, and these things have continued to mature.

First off - the source is so much better. I won't mince words (is that a thing? mix?) - I hate the classic CaptiveSpray source on the TIMSTOFs. This was the source I had on the Flex and on the TIMSTOF SCP. It's fragile, has too many pieces, and is too tricky for people to put together correctly every time. Since this post is mostly positive, I will share something negative and funny. The old CaptiveSpray source is such a pain in the butt to put together that - not even kidding - an extremely capable Bruker field apps person once left my lab with one put together wrong. That's a pretty good sign you need a new source, right? 

The Ultra2 has a completely different source and it's a night and day difference. It isn't simple - you remove the CaptiveSpray and brackets separately - but unless you try to do those things backward there appears to be little chance of doing it wrong. The pieces are largely steel (with the obvious exception of the silly glass capillary and gold o-rings), but you don't need a checklist and the hands of a surgeon to put it together. Huge upgrade. Is an EasySpray source easier? Absolutely! But this is a tremendous upgrade. 

Huge surprise for me - the PASER and HyStar integration have come a long, long way. It's now called BPS (I can't even guess what it stands for), but if you also have a PASER sitting in a corner that you stopped using when you started running DIA, you might want to revisit it. 

BPS not only runs Bruker's version of DIA-NN but it also can integrate TIMSScoring and - get this - it'll run SpectroNaut! You can still run stuff from the command line through PASER so this is how mine is set up now - 


It's currently triggering stand-alone DIA-NN 1.8 to run on the data acquisition PC and it is running SpectroNaut (let's call it SpectroNaut Lite) on the PASER PC. I'll move it up to a new version of DIA-NN this week. Running the GPU version of DIA-NN 2 definitely results in more IDs, but 1.8 and SpectroNaut seem to be in better agreement (again - very limited run time). 

Worth noting - SpectroNaut Lite will make you a .SNE file AND you can go back and merge .SNE files into reports - but if you want to look at your data (which is the whole reason I pay for SpectroNaut) you need to pay for full SpectroNaut. I got a fantastic discount on full SpectroNaut for being a BPS user, which makes this the least I've ever paid for a SpectroNaut annual key. I'll prioritize paying for that before the tariffs kick in. 

How are the results? Unreal. I had an SCP but it was one of the very first commercial models. I don't know if the reason this is a bigger jump in my hands than the Flex to SCP is because my SCP was an early model - or if this is just that big of a jump. 

I wonder if the jump from the Fusion 1 to Fusion 3 is a good analogy? Or the jump from a QE to QE HF? Probably. That's not to say the SCP didn't get great data. I'll eventually have a couple papers out from that system. But if you wanted my Ultra2 right now and offered me two SCPs in trade that for some reason couldn't be upgraded to this one, I wouldn't take it. Here is what is running off the system this weekend (again - only 300 injections). 

These are in reverse order. Definitely ignore the sample names. There is no way I had a retired chromatographer help me come up with a column compatible with the 80 SPD method that has 90% more theoretical plates than the one that is recommended by the manufacturer. I'm not going to get in trouble, because separating peptides on a 5 cm column is silly. 


These are in reverse order, so 19 was my first 2 ng injection on my way out the door on Friday. 31,000 peptides from 2 ng is nuts. But it seems to stabilize at 40,000 peptides and 5,000 protein groups. 200 pg seems to hang out right around half of that. 

Samples 9 and 10 were where I ran out of 200 pg peptide sample. That's the dilution I made the most of, and I have trouble estimating how much I have left from the volume in a 1.5 mL tube. So my blanks are blank.

I'm so so so so pumped with this system so far. 

Another big huge jump for the system is that I can do the tuning directly in TIMSControl. I don't have to switch to the antiquated OTOFControl to tune my resolution and TOF sensitivity and then reboot my PC 11 times to get it to recognize TIMSControl again. 

Where are the problems? Compass Data Analysis. For Thermo users imagine that you had a 16-bit version of Xcalibur that works just great for a TSQ Quantum or LCQ and seemed to struggle with LTQ but you're opening Astral data with it. Are the functions still there? Sure! Can you find them? Maybe! Can it do it fast? Ummmm.....no...not fast. And then imagine that if you wanted to extract an XIC on another PC then you had to buy another license of it. Is it that big of a deal? No, but it only feels right to complain about something while gushing about the single biggest purchase of my career. 

Until people join the lab (this summer, I think!) I'm not going to play with it much. I'm prepping single cells almost entirely label free (largely using the One-Tip method by printing cells from a Tecan Uno into EvoTips - though I'm having some problems with some cells that I'm working through). My plan is to spend 1 day every 2 weeks sorting/prepping single cells. Then they run on the EvoSep until the next sorting day. I can easily do 600 cells, and at 40 SPD that's 2 weeks of run time. Then I can spend my time writing grants, papers, meeting faculty, writing hiring exemptions - and other PI stuff. When the first people join the lab we'll have Bruker out to train them and I'll ...ummm.... pass off the single biggest purchase of my career...off to those super smart and capable young people... mostly. I have some dumb ideas and I should probably do them myself! 

Friday, April 4, 2025

SimpliFi is live now! (Commercial cloud software for data interpretation and sharing!)

 


Disclaimer: I've been a longtime alpha/beta tester for this commercial product. That ultimately means that I've been using this cloud-based tool kit for free for years and occasionally providing useful (?) feedback to the developers. 

They've never once asked me to blog about it, but I'm about to stop being a freeloader and buy an annual license and some credit hours. Also, it might have been live for a while and I only just discovered that 1) it was and 2) that $800/year and $0.20/credit is something I can afford (academic pricing?). For most of my stuff the $200/1,000 credit hours goes a long way. 

$800 puts it at just about the same price as a big group-negotiated bundle deal for Ingenuity. I've paid less each year for Ingenuity, but then I've only been able to log on super early in the morning because we had limited licenses. I like this so so so much better than Ingenuity. 

SimpliFi, however, is designed for proteomics (and metabolomics and can do transcriptomics) but I've only ever pushed the one button. 

Why do I like it? It's smart and simple. You just load your CSV or TSV or Excel or whatever into it, and it can generally recognize exactly what you're looking at. It says stuff like "this is your accession column and I think these are your quantification columns." If it is wrong or you want to ignore a sample, you just un-highlight them. 

In my opinion, the figures are also publication ready. 


And biology is easy to get to (in some cases, of course) - in this one my drug definitely screws with the nucleus (found that out myself, but it's cool that SimpliFi would have found it had I initially used it) 


Also! I can load data into this, make a link, and send it to collaborators, and they can just dig through their own data themselves! That part doesn't use credits - only the data normalization, clustering, that sort of stuff does, and if you're doing small n experiments it doesn't cost a lot of them.

Why you might not like it? 

When it is detecting batch effects, I don't know how - there isn't a paper (yet?). When it is normalizing your input data, I don't know how it is doing it. When it is looking for run order effects (like your signal dropping over time), I don't know how. If you don't like black boxes, this isn't for you. 

It also might cost a million dollars/year for industry, I don't know. 

www.simplifi.protifi.com

Thursday, April 3, 2025

Deeper spatial proteomics with MALDI and collagenase digestion!

 


MALDI mass spectrometry is beautiful and can have really impressive spatial resolution these days, but a single spectrum can only look at so many things at once. Even if you had amazing ion capacity and dynamic range, once you divide that by the thousands of tryptic peptide (and matrix) ions that are around, you're not going to see much past the absolute highest intensity stuff. 

I think the very best we ever saw from a single MALDI shot in Namandje's lab was 100 peptides(?) and I think reasonable FDR wouldn't have been so kind. That was also a very large sampling size with FTMS readout (high resolution and high capacity but low dynamic range - sort of averages out). 

What if you could simplify your proteomic matrix so there were just fewer peptides hanging around? We've seen some interesting stuff recently for single cell loads where bigger peptides are better. Sounds like MALDI could also benefit. 

What about collagenase proteomics? 

What? 

Yes, collagenase. The stuff you use to rapidly extract DNA from tissue for quick genotype tests? Yup! 

Unlike our friend trypsin, which cuts at K and R and makes nice medium-sized peptides, this protein is a lot pickier. It cuts at G-P-X domains - and while I'm very unclear on whether this would be helpful outside of regions where there is lots and lots of collagen - this study focused on the tricky proteomics of the extracellular matrix, or ECM (which appears to be lots and lots and lots of collagen).
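To make the contrast concrete, here's a toy in-silico digestion in Python. The trypsin rule (cut after K/R, but not before P) is the standard one; the collagenase rule below - cutting at the start of each G-P-X repeat - is just my simplification of the "G-P-X domains" description above, not a validated cleavage model:

```python
import re

def tryptic(seq):
    """Standard trypsin rule: split after K or R unless followed by P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", seq) if p]

def toy_digest(seq, pattern):
    """Split a sequence at the start of every (non-overlapping) match of
    `pattern`. A toy cleavage model for illustration only."""
    cut_sites = [m.start() for m in re.finditer(pattern, seq)]
    boundaries = [0] + [s for s in cut_sites if s > 0] + [len(seq)]
    return [seq[a:b] for a, b in zip(boundaries, boundaries[1:])]

collagen_like = "GPAGPPGPQGARGPQGPAGPP"      # made-up collagen-ish repeat
print(tryptic(collagen_like))                 # ['GPAGPPGPQGAR', 'GPQGPAGPP']
print(toy_digest(collagen_like, r"GP."))      # ['GPA', 'GPP', 'GPQGAR', 'GPQ', 'GPA', 'GPP']
```

The point of the sketch: in a collagen-rich region trypsin finds almost nothing to cut, while a G-P-X cutter shreds it into lots of small peptides - which is roughly why a collagen-specific enzyme opens up the ECM.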

Cool - so how on earth do you analyze peptides produced from this weird enzyme off of a MALDI spectrum? You can make a ridiculous number of guesses - or - you can do LCMS and use a lot of standard proteomic tools to understand the peptides and move backwards! 

This study was a lot of work, btw.... LCMS is used to understand the peptide sequences including where and how they charge and their ion mobility - then the LCMS is used to inform the peptide picking from the MALDI. 

End result? They analyze some patient FFPE tissues at 20 µm resolution and come back with hundreds of peptides identified by MALDI matching. Compared to trypsin, collagenase helps them identify nearly 2x the peptides in the ECM, and digesting directly off of tissue slices for LCMS is way more relevant than in-solution digestion. There is a lot of biology here that they seem excited about that is outside of my wheelhouse, but there is some neat stuff here because the collagenase peptides are often +1 charged in both ESI and MALDI, so they're straightforward matches. 

Ultimately, sometimes MALDI papers seem like pretty pictures and not a whole lot else, but this is not one of those. This looks like a really innovative way to get completely new insights from those FFPE blocks. 


Tuesday, April 1, 2025

It looks like Lab Developed Tests for diagnostics are back on the table in the US?!?

 


For an old post on what a Lab Developed Test (LDT) is vs an In Vitro Diagnostic (IVD) you can go here

And...in what was altogether an extremely bad day for the FDA - with thousands of people finding out they'd been cut when they tried to use their badges and they simply didn't work - the agency also lost a federal appeal in this recent ruling.

The American Clinical Lab Association sued to overturn FDA's new rules for moving all new (?) or all (?) diagnostics to IVD designation. So....for those of us who put things like "and then we'll move this to an FDA approved medical device LCMS system..." in Aim 4 of our grants, we don't have to come up with something more clever to write for how we'll translate our findings. 

To the thousands of HHS/NIH/FDA employees who just found out they were cut by a heartless and misinformed administration, I'm sorry and I hope you can continue making the world a better place in a better role for yourselves. 

Monday, March 31, 2025

Two people I know are in ScienceNews talking about AI in Proteomics!

 

Yeah! Proteomics is bigtime! 

Check this out: 1) Instanovo is finally out and 2) two people I know were interviewed about it, and they seem to think it's smart! 

Instanovo final paper here


omicsGMF - Why I'm going to have to install R Studio on my new laptop...

 


You know - I was really pumped when my kid's puppy was like "HEY HEY HEY GUY I HATE, WAKE UP OR I'M ABSOLUTELY GOING TO TAKE A DUMP IN YOUR APARTMENT!!" 

So we walked in the pouring rain without either of our coats until she found the absolute perfect place to poop about 3 blocks from our apartment at 4am. 

And then my Inbox was like - "HEY HEY HEY GUESS WHAT! YOU'RE GOING TO HAVE TO TOTALLY INSTALL R STUDIO ON YOUR PERFECTLY OKAY NEW LAPTOP!" 

And here is why I have to stop putting it off....


I don't have an accessible (free) solution for large scale batch effect correction right now. Do you? I guess I can MSStats it, but a long time ago I realized I'm probably just not smart enough to use it. omicsGMF does require me to fire up R, which I have an official certificate saying I took a bunch of classes in. But it looks to me like I don't have to think about it after I do. It looks like it is smart enough to apply the corrections if I just get everything formatted the right way. 
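omicsGMF itself fits a generalized matrix factorization model in R, which I won't pretend to reproduce here. But for intuition, the crudest baseline it improves on - subtracting each batch's per-protein median from a log-intensity matrix - can be sketched in a few lines of Python:

```python
from statistics import median

def median_center_batches(matrix, batches):
    """Toy batch correction: subtract each batch's per-protein median from a
    log-intensity matrix (rows = proteins, columns = samples). omicsGMF fits
    a real generalized matrix factorization model - this is only the simplest
    baseline idea, shown for intuition."""
    corrected = [row[:] for row in matrix]       # don't mutate the input
    for batch in set(batches):
        cols = [j for j, b in enumerate(batches) if b == batch]
        for row in corrected:
            m = median(row[j] for j in cols)     # per-protein, per-batch median
            for j in cols:
                row[j] -= m
    return corrected

log_intensities = [[10.0, 12.0, 20.0, 22.0]]     # one protein, four runs
print(median_center_batches(log_intensities, ["a", "a", "b", "b"]))
# [[-1.0, 1.0, -1.0, 1.0]]
```

The 10-unit batch offset between runs "a" and "b" disappears while the within-batch differences survive - that's the whole game, and the real tools just do it in a way that also handles missing values and known covariates.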

Now - the upside is that as long as I don't try to rollerblade to work today (Pittsburgh pavement is tough to predict when it's wet) my day absolutely has to get better! 

https://github.com/statOmics/GMFProteomicsPaper

Sunday, March 30, 2025

Selected knockouts of the HLA / MHC presentation system!

 


Forwarding this one in just a second for sure! 


Most of what I know about the HLA/MHC immunopeptidomics / neoantigen presentation system comes from this amazing old paper in Cell Immunity. 

The mechanisms for processing and presentation are largely inferred from the large number of high confidence peptides they identify - but again - this is inference.

If we really wanted to understand how this system works, couldn't you just knock out each protein along the way and do proteomics and immunopeptidomics? I mean, that's what a lot of people would tell you to do, and that's what this cool new MCP paper does. 

They knock out 11 proteins in the system, and there are 3 or 4 that cause big systematic changes in what peptides are presented on the cell surface. The ramifications are probably beyond me and absolutely beyond what time I've got to think about it today, but some collaborators are going to totally dig all of this new insight! 

Saturday, March 29, 2025

MASSIVE.UCSD links are all updating! Here's how you find your data!

 


I was freaking out just a little late last night in lab. You ever get those 100GB warnings from MASSIVE and then the time limit goes by and you're like "false alarm, nothing changed?" 

If you're over your FairUse limits with Thermo RAW files, those files will disappear.

With Bruker .d file format you have embedded folders. Those folders stay, they just get emptied out. For real - it'll look like you still uploaded like 800 .d files and they're still there - but they...aren't....

As I was trying to figure out what I still had backups of - and what I didn't - none of my FTP links that worked before ABRF seemed to work anymore. 

Right now you just need to add the -ftp into every address.

For example - the Proteome Discoverer files I need to answer a reviewer's question were at

ftp://massive.ucsd.edu/v06/MSV000093434/

And the webportal version thinks that's what they still are - 


But they're actually at 

ftp://massive-ftp.ucsd.edu/v06/MSV000093434/search/

That's probably all stuff that will be fixed at the end of the migration. You might need to update some reviewers if you told them they could find the stuff at the former, but it's now at the latter. 
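If you have a pile of old links to fix, the hostname swap is mechanical. A quick Python sketch - assuming only the host changed, which isn't always true (my files also picked up a /search/ suffix), so verify each rewritten link:

```python
def migrate_massive_url(url):
    """Rewrite a pre-migration MASSIVE FTP link to the new host.
    Only the hostname swap is handled; path changes (like the /search/
    suffix in the example above) still have to be checked by hand."""
    return url.replace("ftp://massive.ucsd.edu/", "ftp://massive-ftp.ucsd.edu/")

print(migrate_massive_url("ftp://massive.ucsd.edu/v06/MSV000093434/"))
# ftp://massive-ftp.ucsd.edu/v06/MSV000093434/
```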

Friday, March 28, 2025

Another...unbelievable.... single cell proteomics study - in situ - even....

 


Would you believe you could sample single cells directly off a plate and lyse/digest and transfer that cell with virtually no losses? 

No? Sounds sort of unbelievable and there is some "extraordinary claims require some amazing evidence" or something.  

Would you believe over 3,000 proteins per single cell with a TIMSTOF Pro 1 system running a 21 minute active gradient on those same cells? 

No? Would that immediately make you think - wow - that should have ultra super fucking amazingly extraordinary goddamned evidence, cause I don't think I've ever gotten more than 500 proteins on a very similar system regardless of how long the gradient is - even on a cell line I like a lot that has a higher protein content than either of those? 

Would you be really annoyed that those instrument files weren't provided? 

What if, when you follow a bunch of links to the Electronic Supplemental Information where you were promised access to the data, you instead find an Excel spreadsheet with 20 columns (10 cells each?) and 2,150 rows of protein values and not one single missing value? 


In SINGLE CELLS PROCESSED IN SPECTRONAUT? You can't get 0 missing values if you had 2 ug of each of these injections, yo.

Would you start to wonder if there is actually something of a link between some groups and their ...unbelievable...advances in a new field and the fact they only seem to publish in places where providing your data isn't required? Maybe I'm just a jerk, but this study seemed so cool until about page 3. 

I'm not linking the study, but if you see it, I'd suggest you crank the skepticism up to 11 or so.