Mapping Farm Assets and Boundaries Using Artificial Intelligence and Machine Learning

Engineering, IT, Mathematics and Statistics

ABOUT THE INDUSTRY PARTNER

Agronomeye is an Australian company that specialises in developing digital twin technology to support sustainable and productive agriculture. Utilising highly detailed mapping data and analytics, Agronomeye's platform, AgTwin™, provides real-time information to farmers and landowners, helping them make informed decisions about their properties. Its technology is the first of its kind in the Australian agricultural industry and can be used for a variety of applications, including earthworks feasibility, native vegetation analysis, and water flow prediction models. Agronomeye collaborates with agribusiness operators, researchers, and the tech development community to meet the industry's demand for farm digitisation.

WHAT’S IN IT FOR YOU?

  • The candidate will apply AI and ML techniques to LiDAR data, using cutting-edge technology to develop solutions with a wide range of applications.
  • This project will push the boundaries of AI application in a real-world setting and is likely to change the way data is collected and mapped.
  • The candidate will be supervised by a professor working at Agronomeye who has over 500 publications and has been ranked among the top 1,000 most influential climate change scientists in the world.

RESEARCH TO BE CONDUCTED

  • This project aims to develop a groundbreaking AI and ML solution for automatically identifying and mapping farm-based assets—including troughs, silos, tanks, water bodies, and fences—using high-resolution RGB imagery and LiDAR point cloud data. Traditional methods rely heavily on labour-intensive, manual digitisation, which is costly and prone to human error. By integrating advanced computer vision algorithms and deep learning techniques, this project seeks to automate asset recognition, significantly enhancing both accuracy and efficiency.
  • The system will harness the spatial resolution of LiDAR to capture precise terrain and elevation data, while RGB imagery provides colour and texture details, allowing for a comprehensive asset identification process. State-of-the-art machine learning and AI models will be used to classify and delineate each asset category with a high degree of precision. Additionally, the model will undergo rigorous training on diverse datasets to ensure robustness and adaptability across varied agricultural landscapes and environmental conditions (a minimal data fusion sketch follows this list).
  • This project represents a pioneering effort in applying AI and ML to real-world geospatial mapping in agriculture. The automation of asset mapping could revolutionise data collection methods, offering faster, more cost-effective solutions for farm management, land planning, and environmental monitoring. By eliminating manual digitisation, this project not only promises significant time savings but also establishes a new standard for high-accuracy, scalable farm asset mapping, potentially transforming the way agricultural data is collected and utilised globally.
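
To make the approach described above concrete, here is a minimal sketch of the RGB and LiDAR fusion step. This is not Agronomeye's actual pipeline: the array shapes, the single LiDAR-derived height channel, the six-class list and the toy fully-convolutional network are illustrative assumptions standing in for a co-registered orthomosaic, a normalised digital surface model and a production segmentation model.

    # Minimal sketch: stack RGB bands with a LiDAR-derived height raster and
    # run a per-pixel classifier over the fused input.
    import numpy as np
    import torch
    import torch.nn as nn

    # Assume both rasters are already co-registered on the same 512 x 512 grid.
    rgb = np.random.rand(3, 512, 512).astype(np.float32)     # stand-in for orthophoto bands
    height = np.random.rand(1, 512, 512).astype(np.float32)  # stand-in for a normalised DSM from LiDAR

    fused = np.concatenate([rgb, height], axis=0)   # 4 x H x W fused input
    x = torch.from_numpy(fused).unsqueeze(0)        # add a batch dimension

    # Tiny fully-convolutional head standing in for a real segmentation network
    # (e.g. a U-Net). Six classes: background, trough, silo, tank, water body, fence.
    model = nn.Sequential(
        nn.Conv2d(4, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 6, kernel_size=1),
    )
    logits = model(x)              # 1 x 6 x H x W class scores
    pred = logits.argmax(dim=1)    # per-pixel asset class map, 1 x H x W

In practice the fusion could equally be performed at the feature level, with separate RGB and LiDAR encoders; the sketch simply shows the simplest input-level variant.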

SKILLS WISH LIST

If you're a postgraduate research student and meet some or all of the criteria below, we want to hear from you. We strongly encourage women, Indigenous and disadvantaged candidates to apply:

  • Machine Learning & Deep Learning: (i) Proficiency in machine learning concepts, especially supervised, unsupervised, and semi-supervised learning methods; (ii) Experience with deep learning models, including CNNs, RNNs, and 3D models like PointNet for handling point cloud data; (iii) Understanding of transfer learning, model optimization, and performance tuning for large datasets.
  • Computer Vision and Image Processing: (i) Expertise in computer vision techniques, such as object detection, segmentation, and feature extraction; (ii) Experience with deep learning frameworks like TensorFlow, PyTorch, and Keras for building CNNs and segmentation models; (iii) Knowledge of advanced image processing techniques (e.g., image augmentation, noise reduction) to work with high-resolution RGB data effectively.
  • Geospatial Data Processing and GIS (Geographic Information Systems): (i) Knowledge of Geographic Information Systems (GIS) for handling spatial data and mapping applications; (ii) Familiarity with geospatial software and tools such as QGIS, ArcGIS, and Google Earth Engine; (iii) Experience in handling spatial data formats (GeoTIFF, Shapefiles) and libraries such as GDAL and Rasterio.
  • LiDAR Data Processing and Point Cloud Analysis: (i) Strong understanding of LiDAR data structures and handling point clouds; (ii) Familiarity with LiDAR processing tools and libraries such as PDAL, CloudCompare, and PCL (Point Cloud Library); (iii) Knowledge of 3D point cloud segmentation techniques and data fusion methods to integrate RGB and LiDAR data (see the brief data-loading sketch after this list).
  • Programming and Software Development: (i) Proficiency in programming languages, such as Python, for model development and data processing; (ii) Familiarity with software engineering principles for model deployment and integration.
  • Domain Knowledge in Agriculture and Asset Mapping: (i) Basic understanding of agricultural landscapes, infrastructure, and asset types (e.g., silos, tanks) to fine-tune models for specific use cases.
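
To make the data handling side of this wish list concrete, the short sketch below loads the two input types named in this brief using rasterio and laspy. The file names are placeholders, and laspy stands in here for a fuller PDAL or PCL workflow.

    import numpy as np
    import rasterio
    import laspy

    # High-resolution RGB imagery (GeoTIFF); the file name is a placeholder.
    with rasterio.open("orthomosaic.tif") as src:
        rgb = src.read()             # (bands, rows, cols) array
        transform = src.transform    # maps pixel indices to map coordinates
        crs = src.crs                # coordinate reference system

    # LiDAR point cloud (LAS); the file name is a placeholder.
    las = laspy.read("survey.las")
    points = np.vstack([las.x, las.y, las.z]).T   # (N, 3) array of XYZ coordinates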

RESEARCH OUTCOMES

The project is likely to lead to several innovative research contributions across AI, remote sensing, and geospatial analytics, particularly as they apply to agricultural and environmental management.

  • (i) Automated Asset Detection Models for Rural Landscapes: Development of state-of-the-art models for detecting specific agricultural assets such as troughs, silos, tanks, water bodies, and fences; publication of a new methodology for fusing RGB imagery with LiDAR data to improve object detection and segmentation accuracy in diverse environments.
  • (ii) Advanced Multi-Modal Data Fusion Techniques: Novel approaches for combining high-resolution RGB imagery and LiDAR point cloud data, optimizing accuracy and computational efficiency; contributions to multi-modal data fusion research by showcasing the benefits of integrating spectral and spatial information in asset mapping.
  • (iii) Efficient Training Algorithms for Large-Scale Geospatial Datasets: Development of training protocols and model architectures capable of handling extensive geospatial data, improving the scalability of AI solutions in agriculture; potential insights into transfer learning or semi-supervised learning techniques to reduce data requirements for training AI on rural assets.
  • (iv) Benchmark Dataset and Evaluation Metrics for Agricultural Asset Mapping: Creation of a publicly available, labeled dataset of agricultural assets captured from RGB and LiDAR data, establishing a benchmark for future research; design of evaluation metrics tailored to assess the accuracy and robustness of AI models in detecting rural assets across varying terrains and conditions (a simple per-class IoU sketch follows this list).
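
A common starting point for the evaluation metrics mentioned in point (iv) is per-class intersection-over-union (IoU) over predicted and ground-truth label maps. The sketch below is illustrative only; the class list and helper function are hypothetical and simply mirror the asset categories named in this advertisement.

    import numpy as np

    # Asset categories as named above; index 0 is background.
    CLASSES = ["background", "trough", "silo", "tank", "water_body", "fence"]

    def per_class_iou(pred, truth):
        """IoU for each class over two integer label maps of equal shape."""
        scores = {}
        for idx, name in enumerate(CLASSES):
            intersection = np.logical_and(pred == idx, truth == idx).sum()
            union = np.logical_or(pred == idx, truth == idx).sum()
            scores[name] = float(intersection) / float(union) if union else float("nan")
        return scores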

ADDITIONAL DETAILS

The intern will receive $3,000 per month for the duration of the internship, usually in the form of scholarship payments.

It is expected that the intern will primarily undertake this research project during regular business hours and maintain contact with their academic mentor throughout the internship, either through face-to-face or phone meetings, as appropriate.

The intern and their academic mentor will have the opportunity to negotiate the project’s scope, milestones and timeline during the project planning stage.

Please note, applications are reviewed regularly and this internship may be filled prior to the advertised closing date if a suitable applicant is identified. Early submissions are encouraged.

LOCATION: Sydney, NSW or Remote
DURATION: 5 months
CLOSING DATE: 08/01/2025
ELIGIBILITY: PhD & Masters by Research students, both domestic & international
REF NO: APR - 2637
