Post-harvest technologies
S. Sharifi; M. H. Aghkhani; A. Rohani
Abstract
Introduction On the field and in the paddy milling factory dryer, losses have always been a challenging issue in the rice industry. Different forms of losses in brown rice may occur depending on the field and factory conditions. To reduce these losses, proper management during pre-harvest, harvesting, and post-harvest operations is essential. In this study, different on-field drying and tempering methods were investigated to detect different forms of brown rice losses. Materials and Methods The present study was conducted on the most common Hashemi paddy variety during the 2019-2020 season in Talesh, Rezvanshahr, and Masal cities in Guilan province, Iran, with 0.2 hectares and 5 paddy milling factory dryers. On the fields, the method and date of tillage, irrigation, and transplanting were the same in all experimental units. Moreover, the same amount of fertilizer and similar spraying methods were used across all experiments. For the pre-drying process on the fields, the following three methods were applied on the harvest day: A1) the paddies were spread on the cut stems for insolation, A2) the paddies were stacked and stored after being placed on the cut stems for 5 h, and A3) the paddies were covered with plastic wrap and stored after 5 h of insolation. The first method (A1) is the most common in the area and was chosen as the control treatment. For the second step of the process, the time interval between on-field pre-drying and threshing was considered: B1) 14 to 19 h post-harvest, B2) 20 to 24 h post-harvest, and B3) 25 to 29 h post-harvest. Afterward, methods A1 to A3 were combined with methods B1 to B3 and fed into an axial-flow thresher at a feed rate of 10 kg min-1, a PTO speed of 550 rpm, and two levels of moisture content, 19 and 26 percent (% w.b.). 
The third process was two- or three-stage tempering for 10 or 15 hours, resulting in four levels (C1 to C4), and was carried out in a conventional batch-type dryer at temperatures of 40 and 50 °C and air speeds of 0.5 and 0.8 m s-1 in paddy milling factories. At the end of each process, a 100 g sample was oven-dried for 48 h and a 40× achromatic microscope objective was used to detect incomplete horizontal or vertical cracks, tortoise-pattern cracks, and immature and chalky grains. The equilibrium moisture content was determined to be 7.3 percent. Loss characteristics were analyzed in a randomized complete block factorial design, followed by Tukey's HSD test at the 5% probability level, and comparisons among the three replications were made. Results and Discussion Results demonstrated that the stack and plastic drying methods significantly increased the percentage of losses. In the plastic drying method, the percentage of chalky grains and tortoise-pattern cracks was higher than that of other forms of loss. In the first process, irrespective of the pre-drying method, losses were reduced at the lower level of moisture content. At the end of the first stage, losses in the spreading method were significantly lower at 19% moisture content. Threshing the plastic-wrapped paddies after 14 to 19 hours at 19% moisture content resulted in the maximum threshing loss of 8.446%, and over half of the grains were chalky or had tortoise-pattern cracks. The threshing loss was halved (4.443%) for paddies threshed 25 to 29 h after spreading at a moisture content of 26%. The mean losses in the second step of the process were 7.229, 5.585, and 5.156% for time intervals between on-field pre-drying and threshing of 14 to 19 h, 20 to 24 h, and 25 to 29 h, respectively. 
In the last step of the process, in paddy milling factory dryers, there was no significant difference in the minimum percentage of losses between 10 and 15 hours of three-stage tempering at 40 °C and 0.5 m s-1 air speed. Furthermore, the maximum total losses, with the most incomplete horizontal and vertical cracks, occurred in the two-stage 10 h tempering at 50 °C with 0.5 and 0.8 m s-1 air speeds. Conclusion Food security has always been a critical matter in developing countries, and identifying the sources of losses in the fields and the factories is one way to reduce losses and achieve food security. Stacking or wrapping the paddies in plastic after pre-drying on the fields for 5 h is not recommended because of its effect on increasing the percentage of brown rice losses. Additionally, given the importance of factory dryer scheduling in loss management, three-stage 10 h tempering at 40 °C with 0.5 m s-1 air speed is recommended.
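As a rough illustration of the statistical comparison described above, the sketch below runs a one-way ANOVA over made-up loss percentages for the three pre-drying methods; the treatment values are hypothetical, not the study's data, and the treatment means are the quantities Tukey's HSD would then compare pairwise:

```python
import numpy as np
from scipy import stats

# Hypothetical brown-rice loss percentages (three replications) for the
# three pre-drying methods A1 (spreading), A2 (stacking), A3 (plastic wrap).
losses = {
    "A1_spreading": [4.4, 4.6, 4.3],
    "A2_stacking":  [6.1, 5.9, 6.3],
    "A3_plastic":   [8.2, 8.5, 8.4],
}

# One-way ANOVA across the three treatments at the 5% probability level.
f_stat, p_value = stats.f_oneway(*losses.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Treatment means -- the quantities a Tukey HSD test would compare pairwise.
means = {k: float(np.mean(v)) for k, v in losses.items()}
for name, m in sorted(means.items(), key=lambda kv: kv[1]):
    print(f"{name}: mean loss {m:.2f}%")
```

With clearly separated hypothetical groups like these, the ANOVA rejects equality of means, which is the precondition for the pairwise HSD comparisons reported in the study.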
Bioenergy
J. Rezaeifar; A. Rohani; M. A. Ebrahimi-Nik
Abstract
In the quest for enhanced anaerobic digestion (AD) performance and stability, iron-based additives as micro-nutrients and drinking water treatment sludge (DWTS) emerge as key players. This study investigates the kinetics of methane production during AD of dairy manure, incorporating varying concentrations of Fe and Fe3O4 (10, 20, and 30 mg L-1) and DWTS (6, 12, and 18 mg L-1). Leveraging an extensive library of non-linear regression (NLR) models, 26 candidates were scrutinized and eight emerged as robust predictors for the entire methane production process. The Michaelis-Menten model stood out as the superior choice for unraveling the kinetics of dairy manure AD with the specified additives. Interestingly, the findings revealed that the DWTS treatments showed the highest methane production, while Fe3O4 at 20 and 30 mg L-1 recorded the lowest levels. Notably, DWTS at 6 mg L-1 demonstrated approximately 34% and 42% higher methane production than Fe at 20 mg L-1 and Fe3O4 at 30 mg L-1, respectively, establishing it as the most effective treatment. Additionally, DWTS at 12 mg L-1 exhibited the highest rate of methane production, reaching 147.6 cc on the 6th day. Emphasizing the practical implications, this research underscores the applicability of the proposed model for analyzing other parameters and optimizing AD performance. By delving into the potential of iron-based additives and DWTS, this study opens doors to improving methane production from dairy manure and advancing sustainable waste management practices.
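The Michaelis-Menten fit described above can be sketched as follows; the cumulative-methane data and the parameterization M(t) = M_max·t/(K + t) are illustrative assumptions, not the study's measurements or fitted model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten-type cumulative methane model: M(t) = M_max * t / (K + t),
# where M_max is the ultimate methane yield and K the time at half-maximum.
def michaelis_menten(t, m_max, k):
    return m_max * t / (k + t)

# Hypothetical cumulative methane data (cc) over digestion days -- a plausible
# saturating series for illustration only.
days = np.array([1, 2, 4, 6, 8, 12, 16, 20, 25, 30], dtype=float)
methane = np.array([60, 110, 185, 240, 280, 330, 360, 380, 395, 405], dtype=float)

params, _ = curve_fit(michaelis_menten, days, methane, p0=[400.0, 5.0])
m_max, k = params
pred = michaelis_menten(days, *params)
r2 = 1 - np.sum((methane - pred) ** 2) / np.sum((methane - np.mean(methane)) ** 2)
print(f"M_max = {m_max:.1f} cc, K = {k:.1f} d, R^2 = {r2:.3f}")
```

The same two-parameter fit can be repeated per treatment (each Fe, Fe3O4, and DWTS level) to compare ultimate yields and half-saturation times.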
Bioenergy
M. Zarei; M. R. Bayati; M. A. Ebrahimi-Nik; B. Hejazi; A. Rohani
Abstract
Introduction Anaerobic bacteria break down organic materials such as animal manure, household trash, plant wastes, and sewage sludge during anaerobic digestion and produce biogas. One of the main issues in using biogas is hydrogen sulfide (H2S), which can corrode pipelines and engines at concentrations between 50 and 10,000 ppm. One method for removing H2S from biogas with minimal investment and operating costs is biofiltration. Whether organic or inorganic, the biofilter's bed filling materials must meet certain requirements, including high contact surface area, high permeability, and high absorption. In this study, biochar and compost were used as bed particles in a biofilter to study the removal of H2S from a biogas flow in the lab. Afterward, kinetic modeling was used to describe the removal process numerically. Materials and Methods To remove H2S from the biogas, a lab-scale biofilter was constructed. Biochar and compost were employed separately as the material for the biofilter bed. Because of its high absorption capacity and porosity, biochar is a good choice for substrate and packed beds in biofilters. The biochar used was broken into 10 mm long cylindrical pieces with a diameter of 5 mm. Compost was used as substrate particles because it contains nutrients for microorganisms. Compost granules with an average length of 7.5 mm and a diameter of 3 mm were used in this study. For the biofilter reactor, each of these substrates was placed inside a cylinder with a diameter of 6 cm and a height of 60 cm. The biogas enters at the bottom of the biofilter and exits at the top. During the experiment, biogas flowed at a rate of 72 liters per hour. Mathematical modeling was used to conduct kinetic studies of the process to better understand and generalize the results. This method involves feeding the biofilter column with biogas containing H2S while a biofilm is present on the surface of the biofilter bed particles. 
The bacteria in the biofilm convert the gaseous H2S into harmless elemental sulfur and store it in their cells. The assumptions underlying the mathematical models are: the H2S concentration is uniform throughout the gas flow, the gas flow is constant, and the column temperature is constant at a given height. Results and Discussion Initially, biochar was used as the biofilter substrate to test its effectiveness, and the results obtained for removing H2S from the biogas were acceptable. The H2S concentration in the biogas was significantly reduced using biochar beds, dropping from 300 ppm and 200 ppm to 50 ppm, the greatest reduction achieved. The methane level in the biogas was not significantly affected by the biofilter. This is a significant outcome, considering that the goal is to produce biogas with a high concentration of methane. The H2S removal efficiency was 94% with the biochar bed and a biogas input of 185 ppm H2S. The removal efficiency reached 76% with the compost bed and an input concentration of 70 ppm. Using the mathematical models, the simulation was carried out by adjusting the model parameters until the predicted results closely matched the experimental data. Given how closely the calculated results matched the experimental data, it may be concluded that the suggested mathematical model is adequate for the quantitative description of H2S removal from biogas using a biofilm. The only model parameter that was changed to bring the model results close to the experimental data was the maximum specific growth rate (μmax), which has the greatest influence on the model results. 
The value of μmax for the biochar bed was calculated as 0.0000650 s-1, and for the compost bed at 70 ppm and 35 ppm concentrations as 0.0000071 s-1 and 0.0000035 s-1, respectively. Conclusion The primary objective of this study was to examine the removal of H2S from biogas using readily available, natural substrates. According to the findings, at a height of 60 cm, the H2S concentration in the biochar and compost beds decreased from 185 ppm to 11 ppm (removal efficiency: 94%) and from 70 ppm to 17 ppm (removal efficiency: 76%), respectively. The mathematical models that were developed can quantify the H2S removal process, and the μmax values for biochar and compost were calculated as 0.0000650 s-1 and 0.0000052 s-1, respectively. Acknowledgment The authors would like to thank UNESCO for providing some of the instruments used in this study under grant No. 18-419 RG, funded by the World Academy of Sciences (TWAS).
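A back-of-the-envelope plug-flow sketch of the column gives a feel for the numbers; the lumped first-order constant k below is a hypothetical fitting parameter (it is not the biofilm μmax reported in the study), while the geometry and flow rate are taken from the text:

```python
import math

# Simplified plug-flow view of the biofilter: with a lumped pseudo-first-order
# removal constant k (a hypothetical fitting parameter, NOT the paper's mu_max),
# C_out = C_in * exp(-k * tau), where tau is the empty-bed residence time.

diameter_m, height_m = 0.06, 0.60          # column geometry from the study
flow_l_per_h = 72.0                        # biogas flow rate from the study

bed_volume_l = math.pi * (diameter_m / 2) ** 2 * height_m * 1000.0
tau_s = bed_volume_l / (flow_l_per_h / 3600.0)   # empty-bed residence time, s

c_in = 185.0                               # ppm H2S at the inlet (biochar run)
k = 0.033                                  # s^-1, hypothetical lumped constant
c_out = c_in * math.exp(-k * tau_s)
efficiency = 100.0 * (1.0 - c_out / c_in)
print(f"tau = {tau_s:.0f} s, C_out = {c_out:.1f} ppm, removal = {efficiency:.0f}%")
```

The full biofilm model in the study is richer than this (Monod growth, mass transfer into the biofilm), but the exponential decay of concentration along the bed height is the same qualitative behavior.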
Bioenergy
M. Kamali; R. Abdi; A. Rohani; Sh. Abdollahpour; S. Ebrahimi
Abstract
Introduction Since anaerobic digestion leads to the recovery of energy and nutrients from waste, it is considered the most sustainable method for treating the organic fraction of municipal solid waste. However, due to the long solid retention time of the anaerobic digestion process, its low biogas production performance, and the uncertainty about the safety of digested materials for use in agriculture, applying pretreatments is recommended. Thermal pretreatment is one of the most common pretreatment methods and has been used successfully on an industrial scale. Very little research, nevertheless, has been done on the effects of different temperatures and durations of thermal pretreatment on the enhancement of anaerobic digestion of the organic fraction of municipal solid waste (OFMSW). The main effect of thermal pretreatment is the rupture of cell membranes and the dissolution of organic components. Thermal pretreatment at temperatures above 170 °C may result in the formation of chemical bonds that lead to particle agglomeration, and can cause the loss of volatile organic components, thus reducing the potential for methane production from highly biodegradable organic waste. Therefore, since thermal pretreatment at temperatures above 100 °C and at high pressure requires more energy and more sophisticated equipment, thermal pretreatment of organic materials at low temperatures has recently attracted more attention. 
According to the researchers, thermal pretreatment at temperatures below 100 °C does not lead to the decomposition of complex molecules but rather to the break-up of large molecular clots. The main purpose of this study was to find the optimal levels of pretreatment temperature and time and the most appropriate concentration of digestible materials to achieve maximum biogas production, using the Box-Behnken response surface method to find the objective function and then optimizing these variables with a genetic algorithm. Materials and Methods In this study, a synthetic organic fraction of municipal solid waste was prepared, similar in composition to the organic waste of the Karaj compost plant. The digestate from the anaerobic digester at the Material and Energy Research Institute was used as the inoculum for the digestion process. Characteristics of the raw materials that affect anaerobic digestion, including the moisture content, total solids, and volatile solids of the organic waste and the inoculum, were measured. Experimental digesters were set up according to the model used by MC Leod. After size reduction and homogenization, the synthetic organic wastes were subjected to thermal pretreatment (70, 90, 110 °C) for specific times (30, 90, 150 min). Response surface methodology was used in the design of experiments and process optimization. In this study, three operational parameters, pretreatment temperature, pretreatment time, and concentration of organic material (8, 12, and 16%), were analyzed. After extracting the model for biogas yield based on these variables, the levels that maximize biogas production were determined using a genetic algorithm. Results and Discussion A reduced quadratic model was used to predict the amount of biogas production. The correlation coefficient between the real and predicted data sets was more than 0.95. 
The results suggested that pretreatment time, followed by pretreatment temperature, had the greatest contribution to biogas production (50.86% and 44.81%, respectively). Changes in the organic matter concentration, on the other hand, did not have a significant effect (p < 0.01) on digestion enhancement (1.63%), though they were statistically significant at p < 0.10. The response surface diagram showed that increasing the pretreatment time first led to a rise and then a fall in biogas production, with the decline appearing to continue at longer pretreatment times. Meanwhile, increasing the pretreatment temperature from 70 °C to 110 °C first contributed to higher biogas production and then to a decrease in gas production. The reason for this fall was probably browning (the Maillard reaction). The regression model was applied as the objective function for variable optimization using the genetic algorithm. Based on the results of this algorithm, the optimal thermal pretreatment for biogas production was 95 °C for 104 minutes at a concentration of 12%. The expected amount of biogas production under the optimal pretreatment conditions was 445 mL g-1 VS. Conclusion In this study, the thermal pretreatment temperature and time, as well as the concentration of organic waste to be anaerobically digested, were optimized to achieve the highest biogas production from anaerobic digestion. Statistical analysis of the results revealed that applying thermal pretreatment increased biogas production considerably. According to the regression model, the contributions of pretreatment time and temperature to biogas production were significant (50.86% and 44.81%, respectively). In stark contrast, varying the substrate concentration in the range of 8 to 16% had a smaller effect (1.63%) on biogas production. 
The results of this study also showed that the best pretreatment temperature and time were 95 °C and 104 minutes, respectively, at a concentration of 12%, generating 445 mL g-1 VS of biogas, which is 31.17% higher than the biogas yield from anaerobic digestion of untreated organic waste at this concentration.
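The optimization step can be sketched with a toy real-coded genetic algorithm; the quadratic surface below is made up (its coefficients are chosen only so the optimum lands near the reported 95 °C, 104 min, 12%) and is not the paper's fitted regression model:

```python
import random

random.seed(42)

# Hypothetical reduced-quadratic response surface for biogas yield (mL g^-1 VS)
# as a function of pretreatment temperature T (degC), time t (min), and
# substrate concentration c (%). Coefficients are invented for illustration.
def biogas_yield(x):
    T, t, c = x
    return 445.0 - 0.05 * (T - 95.0) ** 2 - 0.01 * (t - 104.0) ** 2 \
        - 2.0 * (c - 12.0) ** 2

BOUNDS = [(70.0, 110.0), (30.0, 150.0), (8.0, 16.0)]

def clip(x):
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, BOUNDS)]

# A minimal real-coded genetic algorithm: tournament selection, blend
# crossover, Gaussian mutation.
def genetic_algorithm(fitness, pop_size=40, generations=60):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # two tournament-selected parents
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            # blend crossover between the parents
            child = [ai + random.random() * (bi - ai) for ai, bi in zip(a, b)]
            if random.random() < 0.2:  # Gaussian mutation on one gene
                i = random.randrange(len(child))
                child[i] += random.gauss(0.0, 2.0)
            new_pop.append(clip(child))
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_algorithm(biogas_yield)
print(f"optimum ~ T={best[0]:.1f} degC, t={best[1]:.1f} min, c={best[2]:.1f}%, "
      f"yield={biogas_yield(best):.1f}")
```

In the study the objective function was the fitted Box-Behnken regression model; here any callable fitness can be plugged into `genetic_algorithm`.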
M. Hamdani; M. Taki; M. Rahnama; A. Rohani; M. Rahmati-Joneidabad
Abstract
Introduction Controlling the greenhouse microclimate not only influences the growth of plants but is also critical to the spread of diseases inside the greenhouse. The microclimate parameters are inside air, roof, crop, and soil temperatures, relative humidity, light intensity, and carbon dioxide concentration. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by simulation. Static and dynamic models, as well as artificial neural networks (ANNs), are used for this purpose as a function of the meteorological conditions and the parameters of the greenhouse components. Thermal simulation, however, often struggles to predict the inside climate of a greenhouse, and the simulation errors reported in the literature are high. The main objective of this paper is therefore to compare two types of artificial neural networks (MLP and RBF) for predicting four inside variables in an even-span glass greenhouse and to help develop the simulation of inside variables in intelligent greenhouses. Materials and Methods In this research, different sensors were used for collecting temperature, solar radiation, humidity, and wind data. These sensors were placed in different positions inside the greenhouse. After collecting the data, two types of ANNs with LM and BR training algorithms were used to predict the inside variables in an even-span glass greenhouse in Mollasani, Ahvaz. MLP is a feed-forward layered network with one input layer, one output layer, and some hidden layers. Every node computes a weighted sum of its inputs and passes the sum through a soft nonlinearity. The soft nonlinearity, or activation function, of the neurons should be non-decreasing and differentiable. 
One type of ANN is the radial basis function (RBF) neural network, which uses radial basis functions as activation functions. An RBF network has a single hidden layer. Each node of the hidden layer has a parameter vector called a center, which is compared with the network input vector to produce a radially symmetrical response. Responses of the hidden layer are scaled by the connection weights of the output layer and then combined to produce the network output. There are many types of cross-validation, such as repeated random sub-sampling validation, K-fold cross-validation, K×2 cross-validation, and leave-one-out cross-validation. In this study, we adopted K-fold cross-validation for selecting the model parameters. K-fold cross-validation is a technique of dividing the original sample randomly into K sub-samples. Different performance criteria have been used in the literature to assess a model's predictive ability. The mean absolute percentage error (MAPE), root mean square error (RMSE), and coefficient of determination (R2) were selected to evaluate the forecast accuracy of the models in this study. Results and Discussion The results of neural network models depend on the initial random values of the synaptic weights, so in general the results of two different trials will not be the same even if the same training data are used. Therefore, in this research, K-fold cross-validation was used and different data samples were created for training and testing the ANN models. The results showed that trainlm had lower error than trainbr for both the MLP and RBF models. MLP and RBF were also trained with 40 and 80% of the total data, and the results indicated that RBF has the lowest sensitivity to data size. Comparison between the RBF and MLP models showed that RBF has the lowest error for predicting all the inside variables in the greenhouse (Ta, Tp, Tri, Rha). 
In this paper, we tried to show that such data-driven methods are simpler and more accurate than physical heat and mass transfer models for predicting environmental changes. Furthermore, this method can be used to predict other changes in the greenhouse, such as final yield, evapotranspiration, humidity, fruit cracking, CO2 emission, and so on. Future research will therefore focus on other soft computing models, such as ANFIS, GPR, and time series models, to select the best one for modeling and, ultimately, online control of the greenhouse in all climates and environments. Conclusion This research presents a comparison between two artificial neural network models (RBF and MLP) for predicting four inside variables (Ta, Tp, Tri, Rha) in an even-span glass greenhouse. Comparison of the models indicated that RBF has the lower error. The RMSE and MAPE values for the RBF model in predicting all inside variables ranged between 0.25-0.55 and 0.60-1.10, respectively. The results also showed that the RBF model can estimate all the inside variables with a small amount of training data. Such forecasts can give farmers appropriate advance notice of temperature changes, so they can apply preventative measures to avoid damage caused by extreme temperatures. More specifically, predicting greenhouse temperature can not only provide a basis for greenhouse environmental management decisions that reduce planting risks, but can also serve as basic research for a feedback-feed-forward type of climate control strategy.
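A minimal numpy-only version of the RBF-plus-K-fold pipeline is sketched below on synthetic data; the network (randomly chosen centers, ridge-solved output weights), the toy input-output relationship, and all hyperparameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal Gaussian RBF network: centers sampled from the training set,
# output weights solved by ridge-regularized least squares.
def rbf_fit(X, y, n_centers=20, width=0.3, ridge=1e-6):
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = np.exp(-np.sum((X[:, None] - centers[None]) ** 2, axis=2) / (2 * width ** 2))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
    return centers, w

def rbf_predict(X, centers, w, width=0.3):
    Phi = np.exp(-np.sum((X[:, None] - centers[None]) ** 2, axis=2) / (2 * width ** 2))
    return Phi @ w

# The three performance criteria used in the study.
def rmse(y, p): return float(np.sqrt(np.mean((y - p) ** 2)))
def mape(y, p): return float(np.mean(np.abs((y - p) / y)) * 100)
def r2(y, p):   return float(1 - np.sum((y - p) ** 2) / np.sum((y - np.mean(y)) ** 2))

# Synthetic data: a smooth "inside temperature" as a function of two scaled
# inputs (e.g. outside temperature and solar radiation), plus noise.
X = rng.uniform(0, 1, size=(200, 2))
y = 20 + 8 * X[:, 0] + 5 * np.sin(2 * np.pi * X[:, 1]) + rng.normal(0, 0.2, 200)

# K-fold cross-validation (K = 5): every sample is validated exactly once.
K = 5
idx = rng.permutation(len(X))
scores = []
for fold in np.array_split(idx, K):
    train = np.setdiff1d(idx, fold)
    centers, w = rbf_fit(X[train], y[train])
    pred = rbf_predict(X[fold], centers, w)
    scores.append((rmse(y[fold], pred), mape(y[fold], pred), r2(y[fold], pred)))

mean_rmse, mean_mape, mean_r2 = np.mean(scores, axis=0)
print(f"RMSE={mean_rmse:.3f}  MAPE={mean_mape:.2f}%  R2={mean_r2:.3f}")
```

Averaging the metrics over folds, as here, is what makes the comparison between models robust to the random weight/center initialization the abstract mentions.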
A. Vaysi; A. Rohani; M. Tabasizadeh; R. Khodabakhshian; F. Kolahan
Abstract
Introduction In recent years, with the development of industrial products with complex and precise systems, the demand for CNC machines has been increasing, and as the technology has progressed, more failure modes have emerged in these complex, multi-purpose structures. The necessity of CNC machine reliability is also more evident than ever due to its impact on production and operating costs. To reduce the risks and manage the performance of CNC machine parts in order to increase reliability and reduce downtime, it is important to identify all of the failure modes and prioritize them to determine the critical modes and take the proper preventive maintenance actions. Materials and Methods In this study, conventional and fuzzy FMEA, a method in the field of reliability applications, was used to determine the risks in the mechanical components of a CNC lathe machine and all of its potential failure modes. The information was mainly obtained from CNC machine experts and analysts, who provided detailed information about the CNC machining process. These experts used linguistic terms to prioritize the S, O, and D parameters. In the conventional method, the RPN numbers were calculated and prioritized for the different subsystems. In the fuzzy method, the working process of the CNC machine and the mechanisms of its components were first studied, and all failure modes of the mechanical components of the CNC machine and their effects were determined. Subsequently, each of the three parameters S, O, and D was evaluated for each of the failure modes and ranked. For ranking using crisp data, numbers on a 1-10 scale are usually used; then, using linguistic variables, the crisp values are converted into fuzzy values (fuzzification). 125 rules were used to control the output values for correcting the input parameters (inference). 
The Mamdani fuzzy inference algorithm was used to convert the input parameters to fuzzy values and to transfer the qualitative rules into quantitative results (inference). The inference output values are then converted into non-fuzzy values (defuzzification). Finally, the fuzzy RPNs calculated by the fuzzy algorithm and defuzzified are ranked. Results and Discussion In the conventional FMEA method, after calculating and prioritizing the RPNs, the results showed that the method could not separate the 30 subsystems into 30 distinct risk groups because some subsystems had equal RPNs, while it is evident that changing the subsystem changes the nature and severity of its failure. Therefore, this result is not consistent with reality. Given the weaknesses of this method, fuzzy logic was used for better prioritization. In the fuzzy method, the results showed that, on the 5-point scale, with the Gaussian membership function and the centroid defuzzification method, it was possible to prioritize the subsystems into 30 risk groups. In this method, the gearboxes, linear guideway, and fittings had the highest priority in terms of criticality of failure, respectively. Conclusion The results of the fuzzy FMEA method showed that, among the mechanical systems of the CNC lathe machine, the axes components and the lubrication system have the highest FRPNs and degree of criticality, respectively. Using the fuzzy FMEA method, the experts' problems in prioritizing critical modes were solved. In fact, using linguistic variables enabled the experts to make a more realistic judgment of the CNC machine components, and thus, compared to the conventional method, the prioritization of failure modes is more accurate, realistic, and sensible. Also, using this method, the limitations of the conventional method were reduced, and failure modes were prioritized more effectively and efficiently. 
Fuzzy FMEA was found to be an effective tool for prioritizing critical failure modes of mechanical components in CNC lathe machines. The results can also be used in arranging maintenance schedules to take corrective measures, thereby increasing the reliability of the machining process.
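The conventional RPN calculation that the fuzzy method improves upon is a one-liner; the subsystem names and S/O/D scores below are hypothetical, chosen so that two subsystems tie, which illustrates exactly the ranking weakness discussed above:

```python
# Conventional FMEA risk priority number: RPN = S x O x D, with severity,
# occurrence, and detection each scored on a 1-10 scale. Hypothetical scores
# for a few CNC lathe subsystems -- not the study's expert ratings.
failure_modes = {
    "gearbox":          {"S": 8, "O": 6, "D": 5},
    "linear_guideway":  {"S": 7, "O": 5, "D": 6},
    "fittings":         {"S": 6, "O": 7, "D": 5},
    "lubrication_pump": {"S": 5, "O": 4, "D": 6},
}

rpn = {name: v["S"] * v["O"] * v["D"] for name, v in failure_modes.items()}

# Rank from most to least critical; note the tie between the guideway and
# fittings -- equal products from unequal risk profiles, the motivation for
# replacing crisp RPNs with fuzzy RPNs.
for name, value in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: RPN = {value}")
```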
Modeling
J. Baradaran Motie; M. H. Aghkhani; A. Rohani; A. Lakziyan
Abstract
Introduction Presently, declining groundwater levels and the increase in dissolved salts have made the determination of salinity and the management of its variation important in irrigated farms. Soil electrical conductivity is an indirect measure of soil salts. The direct electrode contact method (Wenner method) is one of the widely used methods to rapidly measure soil ECa in farms. However, soil scientists prefer the soil's actual electrical conductivity (the electrical conductivity of the saturated extract, ECe) as an indicator of soil salinity, though its measurement is only possible in the laboratory. The aim of this study was to find a relationship for predicting the soil's actual electrical conductivity (ECe) in terms of temperature, moisture, bulk density, and apparent electrical conductivity (ECa). Estimating ECe in this way isolates the part of ECa that depends on soil salinity and dissolved salts. Materials and Methods This study used an RBF neural network with a Box-Behnken statistical design to explore the impacts of the effective parameters on the direct contact method of measuring soil ECa, and provided a model to estimate ECe from ECa, temperature, moisture content, and bulk density. Soil apparent electrical conductivity (ECa) was measured by the direct contact (Wenner) method. The study considered the four most effective factors: ECe (saturated paste extract EC), moisture, bulk density, and temperature (Baradaran Motie et al., 2010). Given the characteristics of farming soils in Khorasan Razavi Province (Iran), the maximum and minimum of each independent variable were assumed as 0.5-6 mS cm-1 for ECe, 5-25% for moisture content, 1-1.8 g cm-3 for bulk density, and 2-37 °C for soil temperature. 
Considering the experimental design, three moisture levels (5, 15, and 25%), three salinity levels (0.5, 3.25, and 6 mS cm-1), three temperature levels (2, 19, and 37 °C), and three compaction levels with bulk densities of 1, 1.4, and 1.8 g cm-3 were assumed in 27 trials with a predetermined arrangement based on the Box-Behnken technique. Thirteen common algorithms were explored in the MATLAB software package for training the artificial neural network in order to find the optimum algorithm (Table 4). The input layer of the network was designed by integrating a Randomized Complete Block Design (RCBD) with k-fold cross-validation. Using k-fold cross-validation, 20 different datasets were generated for training and validation of the RBF neural network. Results and Discussion A combination of an RCBD and k-fold cross-validation was used, and the results of both the training and validation phases should be considered in selecting the training algorithm. In addition, the R2 of the T1 training algorithm had a much lower standard deviation than the other training algorithms; the lower the standard deviation, the more capable the algorithm is in learning from different datasets. Considering all aspects, the trainbr (T2) training algorithm was found to have the best performance among all 13 training algorithms. Table 7 tabulates the results of the means comparison for the R2 of the RBF model in both the training and validation phases, resulting from the application of some combinations of the S and L2 factors as an interaction. As can be observed, R2 = 0.99 for all of them, with no significant difference, though the order differed between the training and validation phases. Given the importance of the training phase, L2 = 9 and S = 0.1 were regarded as the optimum values. The sensitivity analysis of the network revealed that soil ECa, moisture, bulk density, and temperature had, in that order, the highest to lowest impact on the estimation of soil ECe. 
This model can improve the precision of soil ECa measurement systems in the estimation and preparation of soil salinity maps. Furthermore, it saves time in data analysis and soil EC mapping because it does not require recollecting data to calibrate the systems. A validation process was carried out with a set of 60 field-collected data points; the results show R2 = 0.986 between predicted and measured values. Conclusion The present research focused on improving the precision of soil ECe estimation on the basis of easily accessible parameters (ECa, temperature, moisture, and bulk density). In conventional methods of soil EC mapping, the systems only measure soil ECa and then calibrate it to ECe by collecting some samples and applying statistical methods. In this study, soil ECe was estimated with R2 = 0.99 by a multivariate artificial neural network model whose inputs were the ECa, temperature, moisture, and bulk density of the soil, without any need to collect further soil samples or perform a calibration process. The Bayesian regularization training algorithm was found to be the best training algorithm for this neural network. Thereby, soil EC variation maps can be prepared with higher precision to estimate the spatial spread of salinity in farms. The results also imply that soil ECa, moisture, bulk density, and temperature have, in that order, the highest to lowest effect on the estimation of soil ECe.
Design and Construction
M. A. Ebrahimi-Nik; A. Rohani
Abstract
Introduction More than 40 percent of the world's population now depends on biomass as their main source of energy for cooking. In Iran, the lack of access roads and an inefficient transportation structure have led some communities to adopt biomass as their main cooking fuel. In such communities, inefficient traditional three-wall cook stoves (TCS) are the sole means of cooking with biomass, which results in high fuel consumption and smoke emission. Biomass gasifier cook stoves have been the focus of many studies as a solution for such regions. In these stoves, biomass is pyrolyzed with the supply of primary air; the pyrolysis vapors are then mixed with secondary air in a combustion chamber, where a clean flame forms. In this study, a biomass cook stove was manufactured and its performance was evaluated with three kinds of biomass waste: almond shell, wood chips, and corn cob. Materials and Methods A natural-draft semi-gasifier stove was manufactured based on the stove proposed by Anderson et al. (2007). It consisted of two concentric metal cylinders with two sets of primary and secondary air inlet holes, and was 305 mm tall and 200 mm in diameter. The stove was fed with wood chips, almond shell, and corn cob. The thermal performance of the stove was evaluated based on the standard water boiling test, which consists of three phases: cold start, hot start, and simmering. Time to boil, burning rate, and fire power were measured. A K-type thermocouple was used to measure the water temperature. Emission of carbon monoxide from the stove was measured in three situations (open area, kitchen without hood, and kitchen under hood) using a CO meter (CO110, Taiwan). Results and Discussion Neither particulate matter nor smoke was visually observed during stove operation, except in the final seconds when the stove was about to run out of fuel. The flame color was yellow and partly blue.
The average time to boil was 15 min, not significantly longer than that of the LPG stove (13 min). Time to boil in the hot phase was almost the same for all fuels, which is not in line with the studies reported by Kshirsagar and Kalamkar (2014), Ochieng et al. (2013), and Parmigiani et al. (2014). This is probably due to the stove body material: the hot-phase test aims to show the effect of the stove body temperature on performance. In contrast with most stoves, the one used in the present study was made of a thin (0.3 mm) iron sheet, which has a high heat transfer and low heat capacity; this results in a rapid increase in the stove body temperature up to its maximum. The longest flaming duration (51 min) was observed with 350 g of almond shell. Thermal efficiency, on the other hand, differed among the biomass fuels. The stove achieved an average thermal efficiency of 40.8%, almost three times that of open fire. The emission tests showed that the average carbon monoxide concentrations surrounding the operator in the open area, the kitchen without hood, the kitchen under hood, and traditional open fire were 4.7, 7.5, 5.2, and 430 ppm, respectively. Conclusion The amount of carbon monoxide emitted into the room complies with the US National Ambient Air Quality Standards (NAAQS); hence, compared with traditional methods of cooking in deprived regions, the stove burns cleaner with higher efficiency. To help prevent respiratory diseases among the women who cook, this stove could be disseminated in some deprived regions of Iran.
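The water boiling test efficiency reported above is the ratio of useful heat (sensible plus latent) to the heat released by the fuel. A minimal sketch of that calculation; the input numbers are illustrative, not the paper's measurements:

```python
def wbt_thermal_efficiency(water_kg, delta_t_k, evap_kg, fuel_kg, lhv_kj_per_kg):
    """Water boiling test efficiency (%): sensible heat to the water
    plus latent heat of the evaporated water, over the heat released
    by the burned fuel."""
    CP_WATER = 4.186   # kJ kg-1 K-1, specific heat of water
    H_FG = 2260.0      # kJ kg-1, latent heat of vaporization
    useful = water_kg * CP_WATER * delta_t_k + evap_kg * H_FG
    return 100.0 * useful / (fuel_kg * lhv_kj_per_kg)

# Illustrative numbers only (not the paper's measurements): 2.5 kg of
# water heated 75 K, 0.12 kg evaporated, 0.35 kg almond shell (~18 MJ/kg)
print(round(wbt_thermal_efficiency(2.5, 75.0, 0.12, 0.35, 18000.0), 1))
```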
R. Goudarzi; H. Sadrnia; A. Rohani; M. Nouribaygi
Abstract
Introduction The demand for predetermined optimal coverage paths in agricultural environments has increased with the growing application of field robots and autonomous field machines. The coverage path planning (CPP) problem has been extensively studied in robotics, and many algorithms have been proposed on many topics, but the differences and limitations of agriculture have led to several heuristic and adapted methods distinct from those of robotics. In this paper, a modified and enhanced version of a decomposition algorithm currently used in robotics (boustrophedon cellular decomposition) is presented as a main part of path planning systems for agricultural vehicles. The developed algorithm is based on parallelization of the edges of the polygon representing the environment, so as to satisfy the requirements of the problem as far as possible. This idea rests on the "minimum facing to the cost-making condition", which is in turn derived from the encounter concept as a basis of cost-making factors. Materials and Methods Generally, in boustrophedon cellular decomposition (BCD), a line termed a slice sweeps an area in a predetermined direction and decomposes the area only at critical points (points from which two segments can be extended above and below). Furthermore, the sweep-line direction does not change until the decomposition finishes. To implement the parallelization method in BCD, two modifications were applied, yielding a modified version of the boustrophedon cellular decomposition (M-BCD). In the first modification, the longest edge (base edge) is targeted, and the sweep-line direction is set in line with the base edge (the sweep direction is set perpendicular to the sweep line). The sweep line then moves through the environment and stops at the first (nearest) critical point.
The next sweep direction remains the same as the previous one if the lengths of the polygon's newly added edges during the decomposition are less than or equal to that of the base edge; otherwise, a search is needed to choose a new base edge. This process is repeated until complete coverage is achieved. The second modification is cutting the polygon at the location of the base edge to generate several ideal polygons beside the base edges. The algorithm was applied to a dataset (18 cases, ranging from simple-shaped to complex-shaped polygons) gathered from other studies and was compared with a split-merge algorithm that has been used in several other studies. The M-BCD algorithm was coded in C++ using Microsoft Visual Studio 2013 and run on a laptop with a 2.5 GHz Intel(R) Core™ i5-4200M CPU and 4 GB of RAM. The split-merge algorithm provided by Driscoll (2011) was also coded, and the two algorithms were applied to the dataset. The cost of a coverage plan was calculated using the cost function of U-shaped turns from Jin and Tang (2010). The machine-specific parameters were a working width of 10 m and a minimum turning radius of 5 m. Results and Discussion Based on the results, the proposed algorithm has a low computational time (below 100 ms on the dataset) and runs many times (75 times on average) faster than the split-merge algorithm. The algorithm yielded calculated savings of up to 12% (2% on average) over the split-merge algorithm. Another consequence of the parallelization method was the effectiveness of a multi-optimal-direction coverage pattern over single-optimal-direction coverage: calculated savings of up to 14% (2% on average) over a single optimal direction were achieved. The algorithm was also evaluated in detail on several test cases.
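The non-effective distance driven in headland turns can be sketched for the simplest case. This is a geometric simplification, not Jin and Tang's (2010) full cost function, and it assumes the working width is at least twice the turning radius (true for the paper's 10 m width and 5 m radius):

```python
import math

def u_turn_length(width_m, radius_m):
    """Length of a simple U-turn between adjacent parallel passes:
    a half-circle of the minimum turning radius plus a straight link.
    Valid only when the working width is at least twice the radius."""
    if width_m < 2.0 * radius_m:
        raise ValueError("omega/bulb turns are needed when w < 2r")
    return math.pi * radius_m + (width_m - 2.0 * radius_m)

def coverage_cost(field_span_m, width_m, radius_m):
    """Total non-effective (turning) distance to cover a field span
    with parallel passes of the given working width."""
    passes = math.ceil(field_span_m / width_m)
    return (passes - 1) * u_turn_length(width_m, radius_m)

# Paper's machine parameters: working width 10 m, turning radius 5 m
print(round(coverage_cost(200.0, 10.0, 5.0), 1))
```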
Based on the results, it is possible to lose optimal solutions, especially in the case of simple-shaped environments (in terms of the number of convex points and internal obstacles). For example, case 10 in the dataset has a number of orthogonal edges. Reviewing the algorithm and Figure 4 demonstrates that the sweep line moves down from the first longest edge at the top of the polygon and does not stop during the process until the whole area is covered with a single coverage path direction (parallel to the longest edge). As can be seen, no decomposition is proposed, because the sweep line has faced no critical points. Based on the results in Table 2, this case incurs 8% (equal to 88 m) more cost (in terms of non-effective distance) than an optimal direction and the split-merge algorithm. There are similar cases in the dataset: numbers 9, 12, and 13. This condition rarely occurs in complex environments, and in general it can be prevented by adding an evaluation step at the end of the decomposition process; ideally, the cost of the coverage plan must be significantly less than the corresponding cost of a single optimal direction. Unlike in the simple cases, the algorithm returns near-optimal solutions in complex environments. A good example of this ability is shown in Figure 6, for case 17 of the dataset: a field with many edges (almost 90), several non-convex points, and an internal irregularly shaped obstacle. In a very short time (87 ms), the M-BCD algorithm generated several near-ideally shaped sub-regions around the field and yielded a calculated saving of 5% over an optimal direction with minimum non-effective distance. The solution of the split-merge algorithm by Oksanen and Visala (2009) is also shown in Figure 6; it can clearly be seen that the coverage pattern produced by M-BCD is very close to that of the time-consuming, optimal split-merge algorithm. This verifies that M-BCD is efficient and close to optimal.
There are similar hard test cases in which considerable savings have been achieved (cases 6, 8, and 14). Conclusion In this paper, a modified decomposition algorithm was presented as a main part of path planning systems in agricultural environments. The proposed algorithm uses parallelization of the edges of the polygon; this method is based on the encounter concept and the "minimum facing to cost-making condition". Although the general problem has been proved to be NP-hard, the method limits the search space correctly and effectively, which yields close-to-optimal solutions quickly. Another advantage of the method is the suitability of its solutions for any kind of machine and any polygonal flat field (and fields that can be considered flat).
M. Taki; Y. Ajabshirchi; S. F. Ranjbar; A. Rohani; M. Matloobi
Abstract
Introduction Controlling the greenhouse microclimate not only influences the growth of plants but is also critical in the spread of diseases inside the greenhouse. The microclimate parameters considered were inside air, roof, and soil temperatures, relative humidity, and solar radiation intensity. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by simulation; static and dynamic models are used for this purpose as functions of the meteorological conditions and the parameters of the greenhouse components. Several studies up to 2015 simulated and predicted the inside variables of different greenhouse structures; however, simulation often struggles to predict the inside climate of a greenhouse, and the simulation errors reported in the literature are relatively high. The main objective of this paper is to compare heat transfer and regression models for predicting inside air and roof temperatures in a semi-solar greenhouse at the University of Tabriz. Materials and Methods In this study, a semi-solar greenhouse was designed and constructed in the north-west of Iran in Azerbaijan Province (38°10′ N, 46°18′ E, elevation 1364 m above sea level). The shape and orientation of the greenhouse were selected from common greenhouse shapes so as to receive maximum solar radiation throughout the year. An internal thermal screen and a cement north wall were also used to store heat and prevent heat loss during the cold period of the year; hence we call this structure a 'semi-solar' greenhouse. It was covered with glass (4 mm thick), occupied a surface of approximately 15.36 m2 with a volume of 26.4 m3, and was oriented east-west, perpendicular to the prevailing wind direction.
To measure the temperature and relative humidity of the air, soil, and roof inside and outside the greenhouse, SHT11 sensors were used. The temperature measurement accuracy was ±0.4% at 20 °C and the moisture measurement accuracy was ±3%. These sensors were placed in the soil, on the roof (inside the greenhouse), and in the air of the greenhouse and outside to measure temperature and relative humidity. At a height of 1 m above the ground outside the greenhouse, a pyranometer (TES 1333) was used. Its sensitivity is proportional to the cosine of the incidence angle of the radiation, and it measures global solar radiation in the 400-1110 nm spectral band with an accuracy of approximately ±5%. The heat transfer models used to predict the inside air and roof temperatures are given by equations (1) and (5). Results and Discussion The results showed that solar radiation on the roof of the semi-solar greenhouse was higher in the afternoon, so this shape can receive large amounts of solar energy during a day. From a statistical point of view, both the measured and predicted test data were analyzed to determine whether there are statistically significant differences between them. The null hypothesis assumes that the statistical parameters of both series are equal; the p-value, with a threshold of 0.05, was used to check each hypothesis. If the p-value is greater than the threshold, the null hypothesis is not rejected. To check the differences between the data series, different tests were performed and the p-value was calculated for each case. Student's t-test was used to compare the means of both series, under the assumption that the variances of the two samples could be considered equal; the variances were compared using the F-test, assuming a normal distribution of samples.
The results showed that the p-values for the heat model in both statistical tests (comparison of means and of variances) were lower than those of the regression model, so the heat model was not efficient at predicting Tri and Ta. RMSE, MAPE, EF, and the W factor were calculated for the two models; the results showed that the heat model cannot predict the inside air and roof temperatures as well as the regression model does. Conclusion This article focused on the application of heat transfer and regression models to predict the inside air (Ta) and roof (Tri) temperatures of a semi-solar greenhouse in Iran. To show the applicability and superiority of the proposed approach, measured data on inside air and roof temperature were used, and the data were first preprocessed to improve the output. The results showed that the RMSE of the heat model in predicting Ta and Tri is about 1.58 and 6.56 times higher, respectively, than that of the regression model. Likewise, the EF and W factors of the heat model for these predictions are about 0.003 and 0.041, and 0.013 and 0.220, lower than those of the regression model, respectively. We propose using an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) to predict inside variables in greenhouses and comparing the results with the heat transfer and regression models.
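The comparison statistics named above (RMSE, MAPE, model efficiency EF, and Willmott's index of agreement W) can be computed directly from paired observed/predicted series. A minimal sketch with toy numbers, not the paper's data:

```python
def fit_metrics(observed, predicted):
    """RMSE, MAPE (%), Nash-Sutcliffe model efficiency EF, and
    Willmott's index of agreement W for paired series."""
    n = len(observed)
    mean_o = sum(observed) / n
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    rmse = (sse / n) ** 0.5
    mape = 100.0 / n * sum(abs((o - p) / o) for o, p in zip(observed, predicted))
    ef = 1.0 - sse / sum((o - mean_o) ** 2 for o in observed)
    w = 1.0 - sse / sum((abs(p - mean_o) + abs(o - mean_o)) ** 2
                        for o, p in zip(observed, predicted))
    return rmse, mape, ef, w

# Toy temperature series (illustrative only)
obs = [20.0, 22.0, 25.0, 27.0, 30.0]
pred = [20.5, 21.5, 25.5, 27.5, 29.0]
rmse, mape, ef, w = fit_metrics(obs, pred)
print(round(rmse, 3), round(ef, 3))
```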
A. Rohani; S. I. Saedi; H. Gerailue; M. H. Aghkhani
Abstract
Introduction: Fast and accurate determination of the geometrical properties of agricultural products has many applications in agricultural operations such as planting, cultivating, harvesting, and post-harvest handling. Calculations related to storing, shipping, and storage-coating materials, as well as peeling time and surface microbial concentrations, are some applications of estimating product volume and surface area. Sphericity is also a parameter by which the shape differences between fruits, vegetables, grains, and seeds can be quantified; it is important in grading systems and in inspecting the rolling capability of agricultural products. Bayram presented a new dimensional method and equation to calculate the sphericity of certain shapes and some granular food materials (Bayram, 2005). Kumar and Mathew proposed a theoretically sound method for estimating the surface area of ellipsoidal food materials (Kumar and Mathew, 2003). Clayton et al. used non-linear regression models to calculate apple surface area from fruit mass or volume (Clayton et al., 1995). Humeida and Hobani predicted the surface area and volume of pomegranates based on the weight and geometric mean diameter (Humeida and Hobani, 1993). Wang and Nguang designed a low-cost sensor system to automatically compute the volume and surface area of axi-symmetric agricultural products such as eggs, lemons, limes, and tamarillos (Wang and Nguang, 2007). The main objective of this study was to investigate the potential of the Artificial Neural Network (ANN) technique as an alternative method to predict the volume, surface area, and sphericity of pomegranates.
Materials and methods: The water displacement method (WDM) was used for measuring the actual volume of pomegranates, while the sphericity and surface area were computed using analytical methods. The neural MLP models were designed with the three nominal diameters of the pomegranates as input variables, while the model output consisted of one of the three parameters: volume, sphericity, or surface area. Prior to any ANN training, the data were normalized over the range [0, 1]. Fig. 1 shows an MLP with one hidden layer. In this study, the back-propagation with declining learning-rate factor (BDLRF) training algorithm was employed. The mean absolute percentage error (MAPE) and the coefficient of determination of the linear regression line between the values predicted by the MLP model and the actual outputs were used to evaluate the performance of the model.
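The analytical step can be sketched as follows: sphericity as the geometric mean diameter over the largest diameter, ellipsoid volume from the three diameters, and surface area via the Knud Thomsen approximation. The surface formula is an assumption used here in place of the paper's equation (3), and the diameters are illustrative:

```python
import math

def pomegranate_geometry(d1, d2, d3):
    """Sphericity, surface area (mm^2), and volume (mm^3) of a fruit
    approximated as a triaxial ellipsoid with nominal diameters in mm.

    Sphericity: geometric mean diameter over the largest diameter.
    Surface area: Knud Thomsen approximation (exponent p ~ 1.6075)."""
    gmd = (d1 * d2 * d3) ** (1.0 / 3.0)
    sphericity = gmd / max(d1, d2, d3)
    a, b, c = d1 / 2.0, d2 / 2.0, d3 / 2.0          # semi-axes
    p = 1.6075
    surface = 4.0 * math.pi * (((a * b) ** p + (a * c) ** p
                                + (b * c) ** p) / 3.0) ** (1.0 / p)
    volume = math.pi / 6.0 * d1 * d2 * d3
    return sphericity, surface, volume

# Hypothetical diameters of one fruit (mm)
s, area, vol = pomegranate_geometry(90.0, 85.0, 82.0)
print(round(s, 3))
```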
Results and Discussion: The number of neurons in the hidden layer and the optimal values of the learning parameters η and α were selected by trial and error. The best result was achieved with five neurons in the hidden layer, and the optimal model performance was obtained with a constant momentum term of 0.8 and a learning rate of 0.9. In this study, 300 epochs were selected as the starting point of the BDLRF. Table 1 shows some statistical characteristics of the actual values (volume measured by WDM, surface area computed by equation (3), and sphericity computed by equation (1)) together with the values predicted by the neural network. The obtained results verified that the differences between the actual values and the estimated ones are negligible; moreover, the volumes predicted by the MLP model are much closer to the actual values than those of equation (2). Statistical comparisons of the desired and predicted data, with the corresponding p-values, are given in Table 2. The p-value was greater than 0.08 in all cases, so there was no significant difference between the statistical parameters; however, the p-value for equation (2) is much smaller than that of the MLP model. The results shown in Figures 2, 3, and 4 indicate that the coefficients of determination between actual and predicted data were greater than 0.9. Considering all the results of our study, the MLP model is more accurate than the analytical methods.
Conclusions: In this paper, we first measured the actual volume of the pomegranates using WDM and equation (2). Then, assuming an ellipsoidal fruit, the sphericity and surface area were computed analytically from the three nominal diameters of each pomegranate. Finally, the results of the designed MLP revealed that the model could be successfully applied to the prediction of sphericity and surface area; the MLP model can therefore be a viable alternative to the analytical methods, provided there is a precise way to measure the three nominal diameters of the pomegranates. In addition, according to the MAPE, the accuracy of the MLP model in predicting the volume of pomegranates was twice that of the analytical method.
A. Rohani; H. Ghaffari; R. Felehgari; Kh. Mohammadi; H. Masoudi
Abstract
Farm machinery managers often need to make complex economic decisions on machinery replacement, and repair and maintenance costs can have significant impacts on these decisions. The farm manager must therefore be able to predict farm machinery repair and maintenance costs. This study aimed to identify a regression model that adequately represents repair and maintenance costs as a function of machine age in cumulative hours of use. Such a regression model can predict repair and maintenance costs over longer time periods and can therefore be used to estimate the economic life of the machine. The study was conducted using field data collected from 11 John Deere 955 combine harvesters used in several western provinces of Iran. The power model was found to perform best for predicting combine repair and maintenance costs. The results showed that the optimum replacement age of the John Deere 955 combine was 54,300 cumulative hours.
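The power model mentioned above, Y = a·X^b with X the cumulative hours of use, is usually fitted by ordinary least squares after a log-log transform. A minimal sketch on synthetic data; the coefficients are illustrative, not the paper's:

```python
import math

def fit_power_model(hours, costs):
    """Fit Y = a * X**b by ordinary least squares on log-transformed
    data, the usual way accumulated repair-and-maintenance cost is
    related to cumulative hours of use."""
    lx = [math.log(x) for x in hours]
    ly = [math.log(y) for y in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data following Y = 0.002 * X**1.4 exactly (illustrative)
hours = [1000, 2000, 4000, 8000]
costs = [0.002 * h ** 1.4 for h in hours]
a, b = fit_power_model(hours, costs)
print(round(a, 4), round(b, 2))
```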
A. Rohani; H. Makarian
Abstract
With the rise of powerful new statistical techniques and neural network models, the development of predictive species distribution models has rapidly increased in ecology. In this research, learning vector quantization (LVQ) and multilayer perceptron (MLP) neural network models were employed to predict, classify, and map the spatial distribution of A. repens L. density. The method was evaluated on weed density data counted at 550 points of a fallow field at the Faculty of Agriculture, Shahrood University of Technology, Semnan, Iran, in 2010. Statistical tests, such as comparisons of means, variances, and statistical distributions, as well as the coefficient of determination of a linear regression between the observed point-sample data and the weed seedling density surfaces estimated by the two neural networks, were used to evaluate the performance of the pattern recognition method. The results showed that in the training and test phases there was no significant difference in mean, variance, or statistical distribution between the observed weed densities and those estimated by the LVQ neural network, whereas for the MLP neural network these comparisons were all significant except for the statistical distribution. In addition, the results indicated that the trained LVQ neural network has a high capability for predicting weed density at unsampled points, with a recognition error of less than 0.64 percent, while the recognition error of the MLP neural network was less than 14.6 percent. The maps showed that the patchy weed distribution offers large potential for site-specific weed control on this field.
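An LVQ classifier of the kind used here maintains labeled prototype vectors and, for each training sample, pulls the nearest prototype toward the sample when their classes match and pushes it away otherwise (the LVQ1 rule). A minimal sketch with hypothetical two-dimensional data, not the field measurements:

```python
def lvq1_train(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1 rule: the winning (nearest) prototype is pulled toward a
    sample of its own class and pushed away from other classes."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            dists = [sum((xi - pi) ** 2 for xi, pi in zip(x, p))
                     for p in prototypes]
            w = dists.index(min(dists))              # winning prototype
            sign = 1.0 if proto_labels[w] == y else -1.0
            prototypes[w] = [pi + sign * lr * (xi - pi)
                             for xi, pi in zip(x, prototypes[w])]
    return prototypes

def lvq1_classify(x, prototypes, proto_labels):
    dists = [sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in prototypes]
    return proto_labels[dists.index(min(dists))]

# Hypothetical 2-D samples for two weed-density classes
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["low", "low", "high", "high"]
protos = lvq1_train(X, y, [[0.3, 0.3], [0.7, 0.7]], ["low", "high"])
print(lvq1_classify([0.15, 0.15], protos, ["low", "high"]))
```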