21 Matching Annotations
  1. Last 7 days
    1. To counteract this, we retroreflected the illumination line as shown in Figure 1C & D, such that the flowing sample was illuminated (pushed) from both sides.

      Instead of adding the second beam to counteract the force of the light sheet, is it possible to add a 3D sheath flow that pushes cells toward the bottom of the channel? This paper explains this concept https://doi.org/10.1063/5.0033291

  2. Apr 2025
    1. We recently introduced a new method of vat-based 3D printing that we refer to as Injection Continuous Liquid Interface Production (iCLIP).

      iCLIP looks like a really promising tech for making 3D printed microfluidics, and it was great to learn more about it through your paper. I wonder how long resin 3D-printed structures like these could remain in the skin without negative side effects? I could imagine devices like these being built into wearables that inject drugs continuously over longer periods of time.

  3. Mar 2025
    1. Looking ahead, future efforts will focus on developing lower-cost hardware solutions to further democratize access to microfabrication techniques. Digital light processing (DLP) printers contain much of the necessary hardware (UV lamp, driver board, DMD)

      I'm really looking forward to seeing how this goes and would be interested in building one myself!

    2. The use of 3D printing resin as a substitute for SU-8 eliminates time-sensitive baking steps and reduces the need for extensive glass slide cleaning.

      I think it's great to be able to use 3D printer resin and still get high-quality structures. I'd be curious to see a comparison of devices made with SU-8 vs. 3D printer resin using this maskless approach: are there still benefits to SU-8 that make investing in the proper safety equipment worthwhile? It would also be interesting to see whether there is still any benefit to using silicon over glass.

  4. Feb 2025
    1. Clear

      Might be worth trying a darker opaque resin; this paper showed a reduction in surface roughness when using Phrozen High Temp 3D Printer Resin [TR250LV]: https://doi.org/10.1038/s41378-023-00607-y

    2. However it remains low enough to guarantee optical transparency for cell observation by confocal microscopy, and our molds exhibit similar roughness to what was previously shown for similar mold printing techniques [23].

      I'm curious at what orientation you printed these molds? This paper showed a reduction in surface roughness when tilting multiple axes of the model: https://doi.org/10.1038/s41378-023-00607-y. I'm not sure whether this same technique will translate between DLP and SLA, though.

    3. Thickness measurement of the PDMS layer under the BOTTOM channel depending on molding techniques

      I think it would be helpful to label Figure 3E with the molding techniques to get a better sense of what is actually happening to the PDMS during the curing step

    4. A) 3D reconstitution of the printed patterns imaged with a surface microscope. B) Close-up photo of one of the printed patterns for a TOP mold.

      I think the descriptions for A and B in Figure 2 were swapped.

    5. salinized

      should be 'silanized' with an 'i'

  5. Dec 2024
    1. Additionally, Reynolds number (Re) estimations suggested that the maximum Re inside the chamber was less than 1, corresponding to laminar flow

      It might be helpful to walk through what dimensions went into this Reynolds number estimation.
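      As a back-of-envelope check, the estimate could be reproduced with a short calculation like the sketch below. The channel dimensions and flow rate here are hypothetical placeholders (not values from the paper), using the standard hydraulic-diameter form of Re for a rectangular channel with water-like fluid properties:

      ```python
      # Hedged sketch: Reynolds number estimate for a rectangular microfluidic channel.
      # All dimensions and the flow rate below are hypothetical, not from the paper.

      def hydraulic_diameter(width_m: float, height_m: float) -> float:
          """Hydraulic diameter of a rectangular channel: D_h = 2wh / (w + h)."""
          return 2.0 * width_m * height_m / (width_m + height_m)

      def reynolds_number(flow_rate_m3s: float, width_m: float, height_m: float,
                          density: float = 1000.0, viscosity: float = 1.0e-3) -> float:
          """Re = rho * v * D_h / mu, with mean velocity v = Q / (w * h).
          Defaults approximate water at room temperature (rho in kg/m^3, mu in Pa*s)."""
          velocity = flow_rate_m3s / (width_m * height_m)
          return density * velocity * hydraulic_diameter(width_m, height_m) / viscosity

      # Example: 10 uL/min through a 500 um x 100 um channel (hypothetical values)
      re = reynolds_number(10e-9 / 60, 500e-6, 100e-6)
      print(f"Re = {re:.3f}")  # → Re = 0.556, well below 1, i.e. laminar
      ```

      Listing which of these inputs (width, height, flow rate, fluid properties) were used in the paper's estimate would make the Re < 1 claim easy to verify.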

    2. Fully cured PDMS parts were separated from the molds using a craft knife.

      What were the conditions for the PDMS curing?

    3. Supplemental Figure 1 provides images of standard USAF resolution targets for the objectives and configurations used in this investigation.

      It's difficult to understand what exactly the distance-vs-grayscale chart underneath each of the USAF targets is referring to. A short explanation here would be helpful.

  6. Nov 2024
    1. The X,Y-resolution (effective pixel resolution) was similar between the two printers, at 30 and 40 μm for the 405- and 385-nm printers, respectively. In this resin, the crosslinking reaction was more efficient with the 385 nm light source, enabling reduced light dosage (shorter times and lower intensities; Table S1), which assisted in diminishing bleed-through light allowing uncured resin to drain more easily from channels. Consistent with this, channels were printable at sizes ~0.2 mm smaller with the 385 nm printer (Fig. 2C) versus the 405 nm source (Fig. 2B).

      Thank you for including this information. I have tried to make similar microfluidic channels of different dimensions, but did not consider the effect that light source wavelength would have on different resin types.

  7. Sep 2024
    1. MER2-1220-32U3M, Daheng Imaging

      It was great reading about an open-source imaging platform being used as a low-cost method of disease detection. I'm curious how this system works with other cameras and what trade-offs there are between this camera and others you may have considered. I've been using FLIR cameras for open-source affordable imaging projects, but I am not sure that is the best option.

    2. The results prove that 1-minute mechanical maceration using such a small handheld device can perform almost equivalently in sample lysis to manual grinding using a mortar and pestle.

      Really great reading about this device! Has anyone tried using it for nucleus extractions of algae? We have used a modified version of this protocol where we cryo-grind Chlamydomonas with a mortar and pestle, but found it to be laborious and inconsistent. I'd imagine you'd need to make significant modifications to cool the system, though that might be beneficial given the heat generated by the motor.

  8. Aug 2024
    1. The low-cost imaging platforms presented here provide an opportunity for labs to introduce phenotyping equipment into their research toolkit, and thus increase the efficiency, reproducibility, and thoroughness of their measurements.

      I really like this approach to plant imaging, I'm curious what you think about a system like this that allows the camera to move between positions to take time lapse images of even more samples?

  9. Feb 2024
    1. Schematic of experimental approach: New Zealand white rabbits are implanted with a chronic 32-channel ECoG grid over visual cortex, and visually-evoked potentials are recorded as high contrast stimuli are presented using a monitor.

      I've really enjoyed reading this paper and learned a lot about optogenetics and ocular implanted devices. I'm curious how you plan to validate the effectiveness of these devices in animals and what kinds of assays would make the most sense.

  10. Jan 2024
    1. laboratories.

      I really enjoyed reading about this project and am curious about implementing something like this in our lab! It would be great to see a video of it in action. It would also be great to know how quickly COPICK can pick colonies from a plate compared with a person in the lab.

    2. At present time (early 2023), there is still a lack of open-source tools on the web to label and create datasets of images using a panoptic segmentation format in a straightforward fashion.

      Has this changed in the last year? Would love to know where advancements are being made

    3. using a reflex camera with a macro objective (Nikon D60 with AF-S Micro NIKKOR 60 mm f/2.8G ED lens)

      I'm curious if there are any drawbacks to training the model using a different camera than the one that is implemented on the OT-2?