Reducing paint drying time is an important step in improving production efficiency and reducing costs. The authors hypothesized that decreased humidity would lead to faster drying, that ultraviolet (UV) light exposure would affect all paint colors equally, that under white light longer-wavelength colors would dry faster than shorter-wavelength colors, and that substrates with higher roughness would dry more slowly. Experiments showed that trials under high humidity dried slightly faster than trials under low humidity, contrary to the hypothesis. Overall, the paint drying process is highly dependent on its surrounding environment, and optimizing it requires a thorough understanding of environmental factors and their interactive effects with the paint constituents.
With advancements in machine learning at large data scales, high-throughput virtual screening has become a more attractive method for screening drug candidates. This study compared the accuracy of molecular descriptors from two cheminformatics software libraries, Mordred and PaDEL, in characterizing the chemo-structural composition of 53 compounds from the non-nucleoside reverse transcriptase inhibitor (NNRTI) class. The classification model built with the filtered set of Mordred descriptors was superior to the model using PaDEL descriptors. This approach can accelerate the identification of hit compounds and improve the efficiency of the drug discovery pipeline.
Dye-sensitized solar cells (DSSCs) use dye as the photoactive material, which captures incoming photons and uses their energy to excite electrons. Research in DSSCs has centered on improving the efficacy of photosensitive dyes. A fruit's color is defined by a unique set of molecules, known as a pigment profile, which changes as the fruit progresses from ripe to rotten. This project investigates the use of fresh and rotten fruit extracts as the photoactive dye in a DSSC.
Every year, around 40% of undergraduate students in the United States discontinue their studies, resulting in a loss of valuable education for students and a loss of money for colleges. Even so, colleges across the nation struggle to discover the underlying causes of these high dropout rates. In this paper, the authors discuss the use of machine learning to find correlations between built-environment factors and the retention rates of colleges. They hypothesized that one way for colleges to improve their retention rates could be to make the physical characteristics of their campuses more visually pleasing. The authors used image classification techniques to analyze images of colleges and correlate certain features, such as colors, cars, and people, with higher or lower retention rates. With three possible classes of high, medium, and low retention rates, the probability that their models reached the right conclusion by choosing randomly was 33%. After finding that this 33%, or 0.33, mark always fell outside the 99% confidence intervals built around their models' accuracies, the authors concluded that their machine learning techniques can be used to find correlations between certain environmental factors and retention rates.
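The chance-baseline comparison described above can be sketched with a normal-approximation confidence interval on classification accuracy; the sample counts below are hypothetical, not the study's actual figures.

```python
import math

def accuracy_ci(correct, total, z=2.576):
    """Normal-approximation confidence interval for classification accuracy.
    z = 2.576 corresponds to a 99% confidence level."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p - z * se, p + z * se

# Hypothetical numbers: a model that classifies 160 of 300 campuses correctly.
low, high = accuracy_ci(160, 300)
chance = 1 / 3  # three retention classes: high, medium, low
print(f"99% CI: ({low:.3f}, {high:.3f}); above chance: {chance < low}")
# -> 99% CI: (0.459, 0.608); above chance: True
```

If the chance baseline of 0.33 falls below the interval's lower bound, as here, the model's accuracy is unlikely to be explained by random guessing.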
The application of machine learning techniques has facilitated the automatic annotation of behavior in video sequences, offering a promising approach for ethological studies by reducing the manual effort required for annotating each video frame. Nevertheless, before relying solely on machine-generated annotations, it is essential to evaluate their accuracy to ensure their reliability and applicability. While it is conventionally accepted that there cannot be a perfect annotation, the degree of error associated with machine-generated annotations should be commensurate with the error between different human annotators. We hypothesized that machine learning supervised with adequate human annotations would be able to accurately predict body parts from video sequences. Here, we conducted a comparative analysis of the quality of annotations generated by humans and machines for the body parts of sheep during treadmill walking. For human annotation, two annotators manually labeled six body parts of sheep in 300 frames. To generate machine annotations, we employed the state-of-the-art pose-estimation library DeepLabCut, which was trained using the frames annotated by the human annotators. As expected, the human annotations demonstrated high consistency between annotators. Notably, the machine learning algorithm also generated accurate predictions, with errors comparable to those between humans. We also observed that abnormal annotations with high error could be revised by introducing Kalman filtering, which interpolates the trajectory of body parts over the time series, enhancing robustness. Our results suggest that conventional transfer learning methods can generate behavior annotations as accurate as those made by humans, presenting great potential for further research.
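The Kalman-filtering step can be illustrated with a minimal one-dimensional constant-velocity filter over a single body-part coordinate; the noise parameters and the simple identity process-noise term are assumptions for the sketch, not the study's actual settings.

```python
def kalman_smooth(observations, q=1e-3, r=1.0):
    """Filter a noisy 1D coordinate track (e.g. the x-position of one body
    part across frames) with a constant-velocity Kalman filter.
    q: process noise, r: measurement noise (both assumed values).
    Returns the filtered positions, one per input frame."""
    x, v = observations[0], 0.0              # state: position and velocity
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # 2x2 state covariance P
    out = []
    for z in observations:
        # Predict: position advances by one frame of velocity (dt = 1);
        # P <- F P F^T + qI for F = [[1, 1], [0, 1]].
        x = x + v
        p00, p01, p10, p11 = (p00 + p01 + p10 + p11 + q,
                              p01 + p11, p10 + p11, p11 + q)
        # Update with the measured position z (measurement matrix H = [1, 0]).
        s = p00 + r                          # innovation variance
        k0, k1 = p00 / s, p10 / s            # Kalman gains
        resid = z - x
        x, v = x + k0 * resid, v + k1 * resid
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
        out.append(x)
    return out
```

Because the filter weighs each detection against the trajectory predicted from previous frames, an isolated abnormal annotation is pulled back toward the track rather than taken at face value.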
In the United States, there are currently 17.8 million people affected by atopic dermatitis (AD), commonly known as eczema. It is characterized by itching and skin inflammation. AD patients are at higher risk for infections, depression, cancer, and suicide. Genetics, environment, and stress are some of the causes of the disease. With the rise of personalized medicine and the acceptance of gene-editing technologies, AD-related variations need to be identified for treatment. Genome-wide association studies (GWAS) have associated the Filaggrin (FLG) gene with AD but have not identified specific problematic single nucleotide polymorphisms (SNPs). This research aimed to refine known SNPs of FLG for gene-editing technologies, to establish a causal link between specific SNPs and the disease, and to target the polymorphisms. The research utilized R and its Bioconductor packages to refine data from the National Center for Biotechnology Information's (NCBI's) Variation Viewer. The algorithm filtered the dataset by coding regions and conserved domains. The algorithm also removed synonymous variations and treated non-synonymous, frameshift, and nonsense variations separately. The non-synonymous variations were refined and ordered by the BLOSUM62 substitution matrix. Overall, the analysis removed 96.65% of the data as redundant or outside the focus of the research, and ordered the remaining relevant data by impact. The code for the project can also be repurposed as a tool for other diseases. The research can help solve GWAS's imprecise identification challenge. This research is the first step in providing the refined databases required for gene-editing treatment.
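The final ordering step can be sketched as a sort over BLOSUM62 substitution scores, where lower scores indicate less conservative and likely more disruptive amino-acid changes. The study used R and Bioconductor; this Python sketch uses hypothetical variant labels and includes only the matrix entries those examples need.

```python
# Subset of the standard BLOSUM62 matrix: (reference aa, variant aa) -> score.
BLOSUM62 = {
    ("S", "F"): -2,
    ("R", "W"): -3,
    ("H", "Y"): 2,
    ("G", "D"): -1,
}

# Hypothetical non-synonymous variants (positions are made up for illustration,
# not taken from the study's FLG dataset).
variants = [
    ("S100F", ("S", "F")),
    ("R501W", ("R", "W")),
    ("H300Y", ("H", "Y")),
    ("G200D", ("G", "D")),
]

# Order by substitution score, most disruptive (lowest score) first.
ranked = sorted(variants, key=lambda v: BLOSUM62[v[1]])
print([name for name, _ in ranked])  # -> ['R501W', 'S100F', 'G200D', 'H300Y']
```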
One-third of the world's people do not have access to clean drinking water. Nadella and Nadella tackle this issue by testing a low-cost filtration system for removing heavy metals and bacteria from water.
Using the European Space Agency's Gaia dataset, the authors analyzed the relationship between white dwarfs' magnitudes and proper motions. They hypothesized that older white dwarf stars may have different velocities than younger ones, possibly because stars slow down as they age. They found that the fast-moving white dwarfs in the dataset were substantially redder and of higher magnitude (traits traditionally associated with older stars) compared to their slower-moving counterparts.
Plastic debris can disrupt marine ecosystems, spread contaminants, and take years to degrade naturally. In this study, Wu et al. aim to establish the scope of Williamston, Michigan's microplastics problem and to find the source of these plastics. The authors initially hypothesize that the Williamston Wastewater Treatment Plant is the primary contributor to Williamston's microplastics pollution. Although they find a general trend of increasing microplastic concentrations from upstream to downstream, they do not pinpoint the source of Williamston's microplastics pollution in the present research.